00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2086 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3351 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.024 The recommended git tool is: git 00:00:00.025 using credential 00000000-0000-0000-0000-000000000002 00:00:00.027 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.043 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.058 Using shallow fetch with depth 1 00:00:00.058 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.058 > git --version # timeout=10 00:00:00.075 > git --version # 'git version 2.39.2' 00:00:00.075 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.092 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.092 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.890 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.902 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.914 Checking out Revision 3aaeb01851f3410c69bd29d15f29de9bbe186390 (FETCH_HEAD) 00:00:02.914 > git config core.sparsecheckout # timeout=10 00:00:02.925 > git read-tree -mu HEAD # timeout=10 00:00:02.940 > git checkout -f 3aaeb01851f3410c69bd29d15f29de9bbe186390 # timeout=5 00:00:02.957 Commit message: "jenkins/autotest: use known issue detector function from shm lib" 00:00:02.958 > git rev-list --no-walk 3aaeb01851f3410c69bd29d15f29de9bbe186390 # timeout=10 00:00:03.051 [Pipeline] Start of Pipeline 00:00:03.065 [Pipeline] library 00:00:03.067 Loading library shm_lib@master 00:00:03.067 Library shm_lib@master is cached. Copying from home. 00:00:03.086 [Pipeline] node 00:00:03.096 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.098 [Pipeline] { 00:00:03.109 [Pipeline] catchError 00:00:03.111 [Pipeline] { 00:00:03.124 [Pipeline] wrap 00:00:03.133 [Pipeline] { 00:00:03.142 [Pipeline] stage 00:00:03.145 [Pipeline] { (Prologue) 00:00:03.167 [Pipeline] echo 00:00:03.169 Node: VM-host-WFP7 00:00:03.176 [Pipeline] cleanWs 00:00:03.186 [WS-CLEANUP] Deleting project workspace... 00:00:03.186 [WS-CLEANUP] Deferred wipeout is used... 
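The prologue above pins the job-config repo (jbp) to a single revision via a depth-1 fetch before the pipeline proper starts. A rough local equivalent of that checkout, assuming anonymous read access to the Gerrit mirror (the build itself authenticates through GIT_ASKPASS and goes via the proxy-dmz.intel.com proxy, both omitted here):

# Sketch only -- mirrors the shallow fetch sequence shown in the log above.
git init jbp && cd jbp
git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f FETCH_HEAD   # resolved to 3aaeb01851f... ("jenkins/autotest: use known issue detector function from shm lib") in this run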
00:00:03.193 [WS-CLEANUP] done 00:00:03.438 [Pipeline] setCustomBuildProperty 00:00:03.532 [Pipeline] httpRequest 00:00:03.551 [Pipeline] echo 00:00:03.553 Sorcerer 10.211.164.101 is alive 00:00:03.562 [Pipeline] retry 00:00:03.564 [Pipeline] { 00:00:03.576 [Pipeline] httpRequest 00:00:03.581 HttpMethod: GET 00:00:03.582 URL: http://10.211.164.101/packages/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:03.583 Sending request to url: http://10.211.164.101/packages/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:03.584 Response Code: HTTP/1.1 200 OK 00:00:03.584 Success: Status code 200 is in the accepted range: 200,404 00:00:03.585 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:03.893 [Pipeline] } 00:00:03.905 [Pipeline] // retry 00:00:03.912 [Pipeline] sh 00:00:04.192 + tar --no-same-owner -xf jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:04.208 [Pipeline] httpRequest 00:00:04.224 [Pipeline] echo 00:00:04.226 Sorcerer 10.211.164.101 is alive 00:00:04.237 [Pipeline] retry 00:00:04.239 [Pipeline] { 00:00:04.252 [Pipeline] httpRequest 00:00:04.256 HttpMethod: GET 00:00:04.257 URL: http://10.211.164.101/packages/spdk_7c739692e8d509752590f3602839174a24291913.tar.gz 00:00:04.257 Sending request to url: http://10.211.164.101/packages/spdk_7c739692e8d509752590f3602839174a24291913.tar.gz 00:00:04.258 Response Code: HTTP/1.1 200 OK 00:00:04.258 Success: Status code 200 is in the accepted range: 200,404 00:00:04.259 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_7c739692e8d509752590f3602839174a24291913.tar.gz 00:00:23.838 [Pipeline] } 00:00:23.856 [Pipeline] // retry 00:00:23.865 [Pipeline] sh 00:00:24.253 + tar --no-same-owner -xf spdk_7c739692e8d509752590f3602839174a24291913.tar.gz 00:00:26.798 [Pipeline] sh 00:00:27.081 + git -C spdk log --oneline -n5 00:00:27.081 7c739692e env_dpdk: restore opts_size after opts structure is zeroed 00:00:27.081 ff89983c5 script/rpc.py: Provide necessary params for bdev_compress_create 00:00:27.081 5dc1c71d6 util: add SPDK_FIELD_VALID() macro 00:00:27.081 3578b28c4 test/nvmf: add helper functions for establishing connections 00:00:27.081 fdd8cea26 nvmf/auth: don't disconnect qpairs on reauth timeout 00:00:27.101 [Pipeline] withCredentials 00:00:27.112 > git --version # timeout=10 00:00:27.125 > git --version # 'git version 2.39.2' 00:00:27.142 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:27.144 [Pipeline] { 00:00:27.154 [Pipeline] retry 00:00:27.157 [Pipeline] { 00:00:27.173 [Pipeline] sh 00:00:27.457 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:27.728 [Pipeline] } 00:00:27.746 [Pipeline] // retry 00:00:27.751 [Pipeline] } 00:00:27.767 [Pipeline] // withCredentials 00:00:27.777 [Pipeline] httpRequest 00:00:27.803 [Pipeline] echo 00:00:27.805 Sorcerer 10.211.164.101 is alive 00:00:27.814 [Pipeline] retry 00:00:27.816 [Pipeline] { 00:00:27.829 [Pipeline] httpRequest 00:00:27.834 HttpMethod: GET 00:00:27.835 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:27.835 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:27.843 Response Code: HTTP/1.1 200 OK 00:00:27.843 Success: Status code 200 is in the accepted range: 200,404 00:00:27.844 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:01.516 [Pipeline] } 00:01:01.534 
[Pipeline] // retry 00:01:01.542 [Pipeline] sh 00:01:01.825 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.218 [Pipeline] sh 00:01:03.500 + git -C dpdk log --oneline -n5 00:01:03.500 caf0f5d395 version: 22.11.4 00:01:03.500 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:03.500 dc9c799c7d vhost: fix missing spinlock unlock 00:01:03.500 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:03.500 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:03.517 [Pipeline] writeFile 00:01:03.532 [Pipeline] sh 00:01:03.815 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:03.827 [Pipeline] sh 00:01:04.109 + cat autorun-spdk.conf 00:01:04.109 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.109 SPDK_RUN_ASAN=1 00:01:04.109 SPDK_RUN_UBSAN=1 00:01:04.109 SPDK_TEST_RAID=1 00:01:04.109 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:04.109 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:04.109 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.116 RUN_NIGHTLY=1 00:01:04.118 [Pipeline] } 00:01:04.131 [Pipeline] // stage 00:01:04.147 [Pipeline] stage 00:01:04.149 [Pipeline] { (Run VM) 00:01:04.164 [Pipeline] sh 00:01:04.446 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:04.446 + echo 'Start stage prepare_nvme.sh' 00:01:04.446 Start stage prepare_nvme.sh 00:01:04.446 + [[ -n 0 ]] 00:01:04.446 + disk_prefix=ex0 00:01:04.446 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:04.446 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:04.446 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:04.446 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.446 ++ SPDK_RUN_ASAN=1 00:01:04.446 ++ SPDK_RUN_UBSAN=1 00:01:04.446 ++ SPDK_TEST_RAID=1 00:01:04.446 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:04.446 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:04.446 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.446 ++ RUN_NIGHTLY=1 00:01:04.446 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:04.446 + nvme_files=() 00:01:04.446 + declare -A nvme_files 00:01:04.446 + backend_dir=/var/lib/libvirt/images/backends 00:01:04.446 + nvme_files['nvme.img']=5G 00:01:04.446 + nvme_files['nvme-cmb.img']=5G 00:01:04.446 + nvme_files['nvme-multi0.img']=4G 00:01:04.446 + nvme_files['nvme-multi1.img']=4G 00:01:04.446 + nvme_files['nvme-multi2.img']=4G 00:01:04.446 + nvme_files['nvme-openstack.img']=8G 00:01:04.446 + nvme_files['nvme-zns.img']=5G 00:01:04.446 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:04.446 + (( SPDK_TEST_FTL == 1 )) 00:01:04.447 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:04.447 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:04.447 + for nvme in "${!nvme_files[@]}" 00:01:04.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:04.447 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.447 + for nvme in "${!nvme_files[@]}" 00:01:04.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:04.447 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.447 + for nvme in "${!nvme_files[@]}" 00:01:04.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:04.447 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:04.447 + for nvme in "${!nvme_files[@]}" 00:01:04.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:04.447 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.447 + for nvme in "${!nvme_files[@]}" 00:01:04.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:04.447 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.447 + for nvme in "${!nvme_files[@]}" 00:01:04.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:04.447 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.447 + for nvme in "${!nvme_files[@]}" 00:01:04.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:04.706 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.706 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:04.706 + echo 'End stage prepare_nvme.sh' 00:01:04.706 End stage prepare_nvme.sh 00:01:04.717 [Pipeline] sh 00:01:05.000 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:05.000 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:05.000 00:01:05.000 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:05.000 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:05.000 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:05.000 HELP=0 00:01:05.000 DRY_RUN=0 00:01:05.000 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:05.000 NVME_DISKS_TYPE=nvme,nvme, 00:01:05.000 NVME_AUTO_CREATE=0 00:01:05.000 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:05.000 NVME_CMB=,, 00:01:05.000 NVME_PMR=,, 00:01:05.000 NVME_ZNS=,, 00:01:05.000 NVME_MS=,, 00:01:05.000 NVME_FDP=,, 00:01:05.000 SPDK_VAGRANT_DISTRO=fedora39 00:01:05.000 
SPDK_VAGRANT_VMCPU=10 00:01:05.000 SPDK_VAGRANT_VMRAM=12288 00:01:05.000 SPDK_VAGRANT_PROVIDER=libvirt 00:01:05.000 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:05.000 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:05.000 SPDK_OPENSTACK_NETWORK=0 00:01:05.000 VAGRANT_PACKAGE_BOX=0 00:01:05.000 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:05.000 FORCE_DISTRO=true 00:01:05.000 VAGRANT_BOX_VERSION= 00:01:05.000 EXTRA_VAGRANTFILES= 00:01:05.000 NIC_MODEL=virtio 00:01:05.000 00:01:05.000 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:05.000 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:06.902 Bringing machine 'default' up with 'libvirt' provider... 00:01:07.470 ==> default: Creating image (snapshot of base box volume). 00:01:07.470 ==> default: Creating domain with the following settings... 00:01:07.470 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1723528569_d4ba9eea294f395e6847 00:01:07.470 ==> default: -- Domain type: kvm 00:01:07.470 ==> default: -- Cpus: 10 00:01:07.470 ==> default: -- Feature: acpi 00:01:07.470 ==> default: -- Feature: apic 00:01:07.470 ==> default: -- Feature: pae 00:01:07.470 ==> default: -- Memory: 12288M 00:01:07.470 ==> default: -- Memory Backing: hugepages: 00:01:07.470 ==> default: -- Management MAC: 00:01:07.470 ==> default: -- Loader: 00:01:07.470 ==> default: -- Nvram: 00:01:07.470 ==> default: -- Base box: spdk/fedora39 00:01:07.470 ==> default: -- Storage pool: default 00:01:07.470 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1723528569_d4ba9eea294f395e6847.img (20G) 00:01:07.470 ==> default: -- Volume Cache: default 00:01:07.470 ==> default: -- Kernel: 00:01:07.470 ==> default: -- Initrd: 00:01:07.470 ==> default: -- Graphics Type: vnc 00:01:07.470 ==> default: -- Graphics Port: -1 00:01:07.470 ==> default: -- Graphics IP: 127.0.0.1 00:01:07.470 ==> default: -- Graphics Password: Not defined 00:01:07.470 ==> default: -- Video Type: cirrus 00:01:07.470 ==> default: -- Video VRAM: 9216 00:01:07.470 ==> default: -- Sound Type: 00:01:07.470 ==> default: -- Keymap: en-us 00:01:07.470 ==> default: -- TPM Path: 00:01:07.470 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:07.470 ==> default: -- Command line args: 00:01:07.470 ==> default: -> value=-device, 00:01:07.470 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:07.470 ==> default: -> value=-drive, 00:01:07.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:07.470 ==> default: -> value=-device, 00:01:07.470 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.470 ==> default: -> value=-device, 00:01:07.470 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:07.470 ==> default: -> value=-drive, 00:01:07.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:07.470 ==> default: -> value=-device, 00:01:07.470 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.470 ==> default: -> value=-drive, 00:01:07.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:01:07.470 ==> default: -> value=-device, 00:01:07.470 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.470 ==> default: -> value=-drive, 00:01:07.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:07.470 ==> default: -> value=-device, 00:01:07.470 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.729 ==> default: Creating shared folders metadata... 00:01:07.729 ==> default: Starting domain. 00:01:09.637 ==> default: Waiting for domain to get an IP address... 00:01:27.732 ==> default: Waiting for SSH to become available... 00:01:27.732 ==> default: Configuring and enabling network interfaces... 00:01:33.014 default: SSH address: 192.168.121.3:22 00:01:33.014 default: SSH username: vagrant 00:01:33.014 default: SSH auth method: private key 00:01:34.921 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:43.046 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:48.325 ==> default: Mounting SSHFS shared folder... 00:01:50.862 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:50.862 ==> default: Checking Mount.. 00:01:52.241 ==> default: Folder Successfully Mounted! 00:01:52.241 ==> default: Running provisioner: file... 00:01:53.178 default: ~/.gitconfig => .gitconfig 00:01:53.747 00:01:53.747 SUCCESS! 00:01:53.747 00:01:53.747 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:53.747 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.747 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:53.747 00:01:53.757 [Pipeline] } 00:01:53.772 [Pipeline] // stage 00:01:53.782 [Pipeline] dir 00:01:53.782 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:53.784 [Pipeline] { 00:01:53.797 [Pipeline] catchError 00:01:53.798 [Pipeline] { 00:01:53.811 [Pipeline] sh 00:01:54.094 + vagrant ssh-config --host+ vagrant 00:01:54.094 sed -ne /^Host/,$p 00:01:54.094 + tee ssh_conf 00:01:56.634 Host vagrant 00:01:56.634 HostName 192.168.121.3 00:01:56.634 User vagrant 00:01:56.634 Port 22 00:01:56.634 UserKnownHostsFile /dev/null 00:01:56.634 StrictHostKeyChecking no 00:01:56.634 PasswordAuthentication no 00:01:56.634 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:56.634 IdentitiesOnly yes 00:01:56.634 LogLevel FATAL 00:01:56.634 ForwardAgent yes 00:01:56.634 ForwardX11 yes 00:01:56.634 00:01:56.649 [Pipeline] withEnv 00:01:56.651 [Pipeline] { 00:01:56.665 [Pipeline] sh 00:01:56.948 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:56.948 source /etc/os-release 00:01:56.948 [[ -e /image.version ]] && img=$(< /image.version) 00:01:56.948 # Minimal, systemd-like check. 
00:01:56.948 if [[ -e /.dockerenv ]]; then 00:01:56.948 # Clear garbage from the node's name: 00:01:56.948 # agt-er_autotest_547-896 -> autotest_547-896 00:01:56.948 # $HOSTNAME is the actual container id 00:01:56.948 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:56.948 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:56.948 # We can assume this is a mount from a host where container is running, 00:01:56.948 # so fetch its hostname to easily identify the target swarm worker. 00:01:56.948 container="$(< /etc/hostname) ($agent)" 00:01:56.948 else 00:01:56.948 # Fallback 00:01:56.948 container=$agent 00:01:56.948 fi 00:01:56.948 fi 00:01:56.948 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:56.948 00:01:57.220 [Pipeline] } 00:01:57.236 [Pipeline] // withEnv 00:01:57.245 [Pipeline] setCustomBuildProperty 00:01:57.260 [Pipeline] stage 00:01:57.263 [Pipeline] { (Tests) 00:01:57.280 [Pipeline] sh 00:01:57.564 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:57.838 [Pipeline] sh 00:01:58.121 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:58.396 [Pipeline] timeout 00:01:58.397 Timeout set to expire in 1 hr 30 min 00:01:58.399 [Pipeline] { 00:01:58.413 [Pipeline] sh 00:01:58.696 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:59.263 HEAD is now at 7c739692e env_dpdk: restore opts_size after opts structure is zeroed 00:01:59.276 [Pipeline] sh 00:01:59.585 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:59.858 [Pipeline] sh 00:02:00.138 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:00.414 [Pipeline] sh 00:02:00.697 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:00.956 ++ readlink -f spdk_repo 00:02:00.956 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:00.956 + [[ -n /home/vagrant/spdk_repo ]] 00:02:00.956 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:00.956 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:00.956 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:00.956 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:00.956 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:00.956 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:00.956 + cd /home/vagrant/spdk_repo 00:02:00.956 + source /etc/os-release 00:02:00.956 ++ NAME='Fedora Linux' 00:02:00.956 ++ VERSION='39 (Cloud Edition)' 00:02:00.956 ++ ID=fedora 00:02:00.956 ++ VERSION_ID=39 00:02:00.956 ++ VERSION_CODENAME= 00:02:00.956 ++ PLATFORM_ID=platform:f39 00:02:00.956 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:00.956 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:00.956 ++ LOGO=fedora-logo-icon 00:02:00.956 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:00.956 ++ HOME_URL=https://fedoraproject.org/ 00:02:00.957 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:00.957 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:00.957 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:00.957 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:00.957 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:00.957 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:00.957 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:00.957 ++ SUPPORT_END=2024-11-12 00:02:00.957 ++ VARIANT='Cloud Edition' 00:02:00.957 ++ VARIANT_ID=cloud 00:02:00.957 + uname -a 00:02:00.957 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:00.957 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:01.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:01.524 Hugepages 00:02:01.524 node hugesize free / total 00:02:01.524 node0 1048576kB 0 / 0 00:02:01.524 node0 2048kB 0 / 0 00:02:01.524 00:02:01.524 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:01.524 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:01.524 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:01.524 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:01.524 + rm -f /tmp/spdk-ld-path 00:02:01.524 + source autorun-spdk.conf 00:02:01.524 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.524 ++ SPDK_RUN_ASAN=1 00:02:01.524 ++ SPDK_RUN_UBSAN=1 00:02:01.525 ++ SPDK_TEST_RAID=1 00:02:01.525 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:01.525 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:01.525 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.525 ++ RUN_NIGHTLY=1 00:02:01.525 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:01.525 + [[ -n '' ]] 00:02:01.525 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:01.525 + for M in /var/spdk/build-*-manifest.txt 00:02:01.525 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:01.525 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:01.525 + for M in /var/spdk/build-*-manifest.txt 00:02:01.525 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:01.525 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:01.525 + for M in /var/spdk/build-*-manifest.txt 00:02:01.525 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:01.525 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:01.525 ++ uname 00:02:01.525 + [[ Linux == \L\i\n\u\x ]] 00:02:01.525 + sudo dmesg -T 00:02:01.784 + sudo dmesg --clear 00:02:01.784 + dmesg_pid=6155 00:02:01.784 + sudo dmesg -Tw 00:02:01.784 + [[ Fedora Linux == FreeBSD ]] 00:02:01.784 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:01.784 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:01.784 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:01.784 + [[ -x /usr/src/fio-static/fio ]] 00:02:01.784 + export FIO_BIN=/usr/src/fio-static/fio 00:02:01.784 + FIO_BIN=/usr/src/fio-static/fio 00:02:01.784 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:01.784 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:01.784 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:01.784 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:01.784 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:01.784 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:01.784 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:01.784 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:01.784 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:01.784 Test configuration: 00:02:01.784 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.784 SPDK_RUN_ASAN=1 00:02:01.784 SPDK_RUN_UBSAN=1 00:02:01.784 SPDK_TEST_RAID=1 00:02:01.784 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:01.784 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:01.784 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.784 RUN_NIGHTLY=1 05:57:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:01.784 05:57:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:01.784 05:57:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:01.784 05:57:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:01.784 05:57:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.784 05:57:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.784 05:57:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.784 05:57:03 -- paths/export.sh@5 -- $ export PATH 00:02:01.784 05:57:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.784 05:57:03 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:01.784 05:57:03 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:01.784 05:57:03 -- 
common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723528623.XXXXXX 00:02:01.784 05:57:03 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723528623.VEJC2r 00:02:01.784 05:57:03 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:01.784 05:57:03 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:02:01.784 05:57:03 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:01.784 05:57:03 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:01.784 05:57:03 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:01.784 05:57:03 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:01.785 05:57:03 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:01.785 05:57:03 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:02:01.785 05:57:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.046 05:57:03 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:02.046 05:57:03 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:02.046 05:57:03 -- pm/common@17 -- $ local monitor 00:02:02.046 05:57:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.046 05:57:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.046 05:57:03 -- pm/common@25 -- $ sleep 1 00:02:02.046 05:57:03 -- pm/common@21 -- $ date +%s 00:02:02.046 05:57:03 -- pm/common@21 -- $ date +%s 00:02:02.046 05:57:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723528623 00:02:02.046 05:57:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723528623 00:02:02.046 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723528623_collect-cpu-load.pm.log 00:02:02.046 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723528623_collect-vmstat.pm.log 00:02:03.046 05:57:04 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:03.046 05:57:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.046 05:57:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.046 05:57:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:03.046 05:57:04 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.046 Tue Aug 13 05:57:04 AM UTC 2024 00:02:03.046 05:57:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.046 v24.09-pre-414-g7c739692e 00:02:03.046 05:57:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:03.046 05:57:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:03.046 05:57:04 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:03.046 05:57:04 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:03.046 05:57:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.046 ************************************ 00:02:03.046 START 
TEST asan 00:02:03.046 ************************************ 00:02:03.046 using asan 00:02:03.046 05:57:04 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:03.046 00:02:03.046 real 0m0.000s 00:02:03.046 user 0m0.000s 00:02:03.046 sys 0m0.000s 00:02:03.046 05:57:04 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:03.046 05:57:04 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:03.046 ************************************ 00:02:03.046 END TEST asan 00:02:03.046 ************************************ 00:02:03.046 05:57:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:03.046 05:57:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:03.046 05:57:04 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:03.046 05:57:04 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:03.046 05:57:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.046 ************************************ 00:02:03.046 START TEST ubsan 00:02:03.046 ************************************ 00:02:03.046 using ubsan 00:02:03.046 05:57:04 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:03.046 00:02:03.046 real 0m0.000s 00:02:03.046 user 0m0.000s 00:02:03.046 sys 0m0.000s 00:02:03.046 05:57:04 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:03.046 05:57:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:03.046 ************************************ 00:02:03.046 END TEST ubsan 00:02:03.046 ************************************ 00:02:03.046 05:57:04 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:03.046 05:57:04 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:03.046 05:57:04 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:03.046 05:57:04 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:03.046 05:57:04 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:03.046 05:57:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.046 ************************************ 00:02:03.046 START TEST build_native_dpdk 00:02:03.046 ************************************ 00:02:03.046 05:57:04 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@69 
-- $ compiler_version=13 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:03.046 caf0f5d395 version: 22.11.4 00:02:03.046 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:03.046 dc9c799c7d vhost: fix missing spinlock unlock 00:02:03.046 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:03.046 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:03.046 05:57:04 
build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:03.046 patching file config/rte_config.h 00:02:03.046 Hunk #1 succeeded at 60 (offset 1 line). 00:02:03.046 05:57:04 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:03.046 05:57:04 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:03.306 05:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:03.306 05:57:04 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:03.306 patching file lib/pcapng/rte_pcapng.c 00:02:03.306 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:03.306 05:57:04 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:03.306 05:57:04 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:03.306 05:57:04 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:03.306 05:57:04 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:03.306 05:57:04 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:08.584 The Meson build system 00:02:08.584 Version: 1.5.0 00:02:08.584 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:08.584 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:08.584 Build type: native build 00:02:08.584 Program cat found: YES (/usr/bin/cat) 00:02:08.584 Project name: DPDK 00:02:08.584 Project version: 22.11.4 00:02:08.584 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.584 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:08.584 Host machine cpu family: x86_64 00:02:08.584 Host machine cpu: x86_64 00:02:08.584 Message: ## Building in Developer Mode ## 00:02:08.584 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.584 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:08.584 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.584 Program objdump found: YES (/usr/bin/objdump) 00:02:08.584 Program python3 found: YES (/usr/bin/python3) 00:02:08.584 Program cat found: YES (/usr/bin/cat) 00:02:08.584 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
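The xtrace blocks above walk through the version gates in autobuild: lt() from scripts/common.sh hands both version strings to cmp_versions, which splits them on '.', '-' and ':' and compares them field by field, and the results decide which DPDK compatibility patches are applied (config/rte_config.h and lib/pcapng/rte_pcapng.c in this run, since 22.11.4 is not older than 21.11.0 but is older than 24.07.0). A condensed sketch of that field-by-field comparison, for illustration only (the real helper takes an operator argument, as the op='<' trace shows, and covers more edge cases):

# Illustrative only -- not the actual scripts/common.sh implementation.
version_lt() {                        # exit 0 if $1 < $2, 1 otherwise
    local -a a b
    IFS='.-:' read -ra a <<< "$1"     # split version fields as in the trace above
    IFS='.-:' read -ra b <<< "$2"
    local v
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                          # versions are equal
}

version_lt 22.11.4 21.11.0 || echo "not older than 21.11.0 -> rte_config.h patch branch taken above"
version_lt 22.11.4 24.07.0 && echo "older than 24.07.0 -> rte_pcapng.c patch branch taken above"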
00:02:08.584 Checking for size of "void *" : 8 00:02:08.584 Checking for size of "void *" : 8 (cached) 00:02:08.584 Library m found: YES 00:02:08.584 Library numa found: YES 00:02:08.584 Has header "numaif.h" : YES 00:02:08.584 Library fdt found: NO 00:02:08.584 Library execinfo found: NO 00:02:08.584 Has header "execinfo.h" : YES 00:02:08.584 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.584 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.584 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.584 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.584 Run-time dependency openssl found: YES 3.1.1 00:02:08.584 Run-time dependency libpcap found: YES 1.10.4 00:02:08.584 Has header "pcap.h" with dependency libpcap: YES 00:02:08.584 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.584 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.584 Compiler for C supports arguments -Wformat: YES 00:02:08.584 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.584 Compiler for C supports arguments -Wformat-security: NO 00:02:08.584 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.584 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.584 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.584 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.584 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.584 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.584 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.584 Compiler for C supports arguments -Wundef: YES 00:02:08.584 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.584 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:08.584 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:08.584 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.584 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.584 Compiler for C supports arguments -mavx512f: YES 00:02:08.584 Checking if "AVX512 checking" compiles: YES 00:02:08.584 Fetching value of define "__SSE4_2__" : 1 00:02:08.584 Fetching value of define "__AES__" : 1 00:02:08.584 Fetching value of define "__AVX__" : 1 00:02:08.584 Fetching value of define "__AVX2__" : 1 00:02:08.584 Fetching value of define "__AVX512BW__" : 1 00:02:08.584 Fetching value of define "__AVX512CD__" : 1 00:02:08.584 Fetching value of define "__AVX512DQ__" : 1 00:02:08.584 Fetching value of define "__AVX512F__" : 1 00:02:08.584 Fetching value of define "__AVX512VL__" : 1 00:02:08.584 Fetching value of define "__PCLMUL__" : 1 00:02:08.584 Fetching value of define "__RDRND__" : 1 00:02:08.584 Fetching value of define "__RDSEED__" : 1 00:02:08.584 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.584 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.584 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.584 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.584 Checking for function "getentropy" : YES 00:02:08.584 Message: lib/eal: Defining dependency "eal" 00:02:08.584 Message: lib/ring: Defining dependency "ring" 00:02:08.584 Message: lib/rcu: Defining dependency "rcu" 00:02:08.584 Message: lib/mempool: Defining dependency "mempool" 00:02:08.584 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.584 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.584 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:08.584 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.584 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.584 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:08.584 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:08.584 Compiler for C supports arguments -mpclmul: YES 00:02:08.584 Compiler for C supports arguments -maes: YES 00:02:08.584 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.584 Compiler for C supports arguments -mavx512bw: YES 00:02:08.584 Compiler for C supports arguments -mavx512dq: YES 00:02:08.584 Compiler for C supports arguments -mavx512vl: YES 00:02:08.584 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.584 Compiler for C supports arguments -mavx2: YES 00:02:08.584 Compiler for C supports arguments -mavx: YES 00:02:08.584 Message: lib/net: Defining dependency "net" 00:02:08.584 Message: lib/meter: Defining dependency "meter" 00:02:08.584 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.584 Message: lib/pci: Defining dependency "pci" 00:02:08.584 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.584 Message: lib/metrics: Defining dependency "metrics" 00:02:08.584 Message: lib/hash: Defining dependency "hash" 00:02:08.584 Message: lib/timer: Defining dependency "timer" 00:02:08.584 Fetching value of define "__AVX2__" : 1 (cached) 00:02:08.584 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.584 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:08.584 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:08.584 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.584 Message: lib/acl: Defining dependency "acl" 00:02:08.584 Message: lib/bbdev: Defining dependency "bbdev" 00:02:08.584 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:08.584 Run-time dependency libelf found: YES 0.191 00:02:08.584 Message: lib/bpf: Defining dependency "bpf" 00:02:08.584 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:08.584 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.584 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.584 Message: lib/distributor: Defining dependency "distributor" 00:02:08.584 Message: lib/efd: Defining dependency "efd" 00:02:08.584 Message: lib/eventdev: Defining dependency "eventdev" 00:02:08.584 Message: lib/gpudev: Defining dependency "gpudev" 00:02:08.585 Message: lib/gro: Defining dependency "gro" 00:02:08.585 Message: lib/gso: Defining dependency "gso" 00:02:08.585 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:08.585 Message: lib/jobstats: Defining dependency "jobstats" 00:02:08.585 Message: lib/latencystats: Defining dependency "latencystats" 00:02:08.585 Message: lib/lpm: Defining dependency "lpm" 00:02:08.585 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.585 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.585 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:08.585 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:08.585 Message: lib/member: Defining dependency "member" 00:02:08.585 Message: lib/pcapng: Defining dependency "pcapng" 00:02:08.585 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.585 Message: lib/power: Defining dependency "power" 00:02:08.585 Message: lib/rawdev: Defining dependency "rawdev" 00:02:08.585 Message: lib/regexdev: Defining dependency "regexdev" 00:02:08.585 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:08.585 Message: lib/rib: Defining dependency "rib" 00:02:08.585 Message: lib/reorder: Defining dependency "reorder" 00:02:08.585 Message: lib/sched: Defining dependency "sched" 00:02:08.585 Message: lib/security: Defining dependency "security" 00:02:08.585 Message: lib/stack: Defining dependency "stack" 00:02:08.585 Has header "linux/userfaultfd.h" : YES 00:02:08.585 Message: lib/vhost: Defining dependency "vhost" 00:02:08.585 Message: lib/ipsec: Defining dependency "ipsec" 00:02:08.585 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.585 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.585 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.585 Message: lib/fib: Defining dependency "fib" 00:02:08.585 Message: lib/port: Defining dependency "port" 00:02:08.585 Message: lib/pdump: Defining dependency "pdump" 00:02:08.585 Message: lib/table: Defining dependency "table" 00:02:08.585 Message: lib/pipeline: Defining dependency "pipeline" 00:02:08.585 Message: lib/graph: Defining dependency "graph" 00:02:08.585 Message: lib/node: Defining dependency "node" 00:02:08.585 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.585 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.585 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.585 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.585 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:08.585 Compiler for C supports arguments -Wno-unused-value: YES 00:02:08.585 Compiler for C supports arguments -Wno-format: YES 00:02:08.585 Compiler for C supports arguments -Wno-format-security: YES 00:02:08.585 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:08.585 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:09.964 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:09.964 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:09.964 Fetching value of define "__AVX2__" : 1 (cached) 00:02:09.964 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:09.964 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:09.964 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.964 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:09.964 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:09.964 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:09.964 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.964 Configuring doxy-api.conf using configuration 00:02:09.964 Program sphinx-build found: NO 00:02:09.964 Configuring rte_build_config.h using configuration 00:02:09.964 Message: 00:02:09.964 ================= 00:02:09.964 Applications Enabled 00:02:09.964 ================= 00:02:09.964 00:02:09.964 apps: 00:02:09.964 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:09.964 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:09.964 test-security-perf, 00:02:09.964 00:02:09.964 Message: 00:02:09.964 ================= 00:02:09.964 Libraries Enabled 00:02:09.964 ================= 00:02:09.964 00:02:09.964 libs: 00:02:09.964 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:09.964 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:09.964 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:09.964 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:09.964 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:09.964 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:09.964 table, pipeline, graph, node, 00:02:09.964 00:02:09.964 Message: 00:02:09.964 =============== 00:02:09.964 Drivers Enabled 00:02:09.964 =============== 00:02:09.964 00:02:09.964 common: 00:02:09.964 00:02:09.964 bus: 00:02:09.964 pci, vdev, 00:02:09.964 mempool: 00:02:09.964 ring, 00:02:09.964 dma: 00:02:09.964 00:02:09.964 net: 00:02:09.964 i40e, 00:02:09.964 raw: 00:02:09.964 00:02:09.964 crypto: 00:02:09.964 00:02:09.964 compress: 00:02:09.964 00:02:09.964 regex: 00:02:09.964 00:02:09.964 vdpa: 00:02:09.964 00:02:09.964 event: 00:02:09.964 00:02:09.964 baseband: 00:02:09.964 00:02:09.964 gpu: 00:02:09.964 00:02:09.964 00:02:09.964 Message: 00:02:09.964 ================= 00:02:09.964 Content Skipped 00:02:09.964 ================= 00:02:09.964 00:02:09.964 apps: 00:02:09.964 00:02:09.964 libs: 00:02:09.964 kni: explicitly disabled via build config (deprecated lib) 00:02:09.964 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:09.964 00:02:09.964 drivers: 00:02:09.964 common/cpt: not in enabled drivers build config 00:02:09.964 common/dpaax: not in enabled drivers build config 00:02:09.964 common/iavf: not in enabled drivers build config 00:02:09.964 common/idpf: not in enabled drivers build config 00:02:09.964 common/mvep: not in enabled drivers build config 00:02:09.964 common/octeontx: not in enabled drivers build config 00:02:09.964 bus/auxiliary: not in enabled drivers build config 00:02:09.964 bus/dpaa: not in enabled drivers build config 00:02:09.964 bus/fslmc: not in enabled drivers build config 00:02:09.964 bus/ifpga: not in enabled drivers build config 00:02:09.964 bus/vmbus: not in enabled drivers build config 00:02:09.964 common/cnxk: not in enabled drivers build config 00:02:09.964 common/mlx5: not in enabled drivers build config 00:02:09.964 common/qat: not in enabled drivers build config 00:02:09.964 common/sfc_efx: not in enabled drivers build config 00:02:09.964 mempool/bucket: not in enabled drivers build config 00:02:09.964 mempool/cnxk: not in enabled drivers build config 00:02:09.964 mempool/dpaa: not in enabled drivers build config 00:02:09.964 mempool/dpaa2: not in enabled drivers build config 00:02:09.964 mempool/octeontx: not in enabled drivers build config 00:02:09.964 mempool/stack: not in enabled drivers build config 00:02:09.964 dma/cnxk: not in enabled drivers build config 00:02:09.964 dma/dpaa: not in enabled drivers build config 00:02:09.964 dma/dpaa2: not in enabled drivers build config 00:02:09.964 dma/hisilicon: not in enabled drivers build config 00:02:09.964 dma/idxd: not in enabled drivers build config 00:02:09.964 dma/ioat: not in enabled drivers build config 00:02:09.964 dma/skeleton: not in enabled drivers build config 00:02:09.964 net/af_packet: not in enabled drivers build config 00:02:09.964 net/af_xdp: not in enabled drivers build config 00:02:09.964 net/ark: not in enabled drivers build config 00:02:09.964 net/atlantic: not in enabled drivers build config 00:02:09.964 net/avp: not in enabled drivers build config 00:02:09.964 net/axgbe: not in enabled drivers build config 00:02:09.964 net/bnx2x: not in enabled drivers build config 00:02:09.964 net/bnxt: not in enabled drivers build config 00:02:09.964 net/bonding: not in enabled drivers build config 00:02:09.964 net/cnxk: not in enabled drivers build config 
00:02:09.964 net/cxgbe: not in enabled drivers build config 00:02:09.964 net/dpaa: not in enabled drivers build config 00:02:09.964 net/dpaa2: not in enabled drivers build config 00:02:09.964 net/e1000: not in enabled drivers build config 00:02:09.964 net/ena: not in enabled drivers build config 00:02:09.964 net/enetc: not in enabled drivers build config 00:02:09.964 net/enetfec: not in enabled drivers build config 00:02:09.964 net/enic: not in enabled drivers build config 00:02:09.964 net/failsafe: not in enabled drivers build config 00:02:09.964 net/fm10k: not in enabled drivers build config 00:02:09.964 net/gve: not in enabled drivers build config 00:02:09.964 net/hinic: not in enabled drivers build config 00:02:09.964 net/hns3: not in enabled drivers build config 00:02:09.964 net/iavf: not in enabled drivers build config 00:02:09.964 net/ice: not in enabled drivers build config 00:02:09.964 net/idpf: not in enabled drivers build config 00:02:09.964 net/igc: not in enabled drivers build config 00:02:09.964 net/ionic: not in enabled drivers build config 00:02:09.964 net/ipn3ke: not in enabled drivers build config 00:02:09.964 net/ixgbe: not in enabled drivers build config 00:02:09.964 net/kni: not in enabled drivers build config 00:02:09.964 net/liquidio: not in enabled drivers build config 00:02:09.964 net/mana: not in enabled drivers build config 00:02:09.964 net/memif: not in enabled drivers build config 00:02:09.964 net/mlx4: not in enabled drivers build config 00:02:09.964 net/mlx5: not in enabled drivers build config 00:02:09.964 net/mvneta: not in enabled drivers build config 00:02:09.964 net/mvpp2: not in enabled drivers build config 00:02:09.964 net/netvsc: not in enabled drivers build config 00:02:09.964 net/nfb: not in enabled drivers build config 00:02:09.964 net/nfp: not in enabled drivers build config 00:02:09.964 net/ngbe: not in enabled drivers build config 00:02:09.964 net/null: not in enabled drivers build config 00:02:09.964 net/octeontx: not in enabled drivers build config 00:02:09.964 net/octeon_ep: not in enabled drivers build config 00:02:09.964 net/pcap: not in enabled drivers build config 00:02:09.964 net/pfe: not in enabled drivers build config 00:02:09.964 net/qede: not in enabled drivers build config 00:02:09.964 net/ring: not in enabled drivers build config 00:02:09.964 net/sfc: not in enabled drivers build config 00:02:09.964 net/softnic: not in enabled drivers build config 00:02:09.964 net/tap: not in enabled drivers build config 00:02:09.964 net/thunderx: not in enabled drivers build config 00:02:09.964 net/txgbe: not in enabled drivers build config 00:02:09.964 net/vdev_netvsc: not in enabled drivers build config 00:02:09.964 net/vhost: not in enabled drivers build config 00:02:09.964 net/virtio: not in enabled drivers build config 00:02:09.964 net/vmxnet3: not in enabled drivers build config 00:02:09.964 raw/cnxk_bphy: not in enabled drivers build config 00:02:09.964 raw/cnxk_gpio: not in enabled drivers build config 00:02:09.964 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:09.964 raw/ifpga: not in enabled drivers build config 00:02:09.964 raw/ntb: not in enabled drivers build config 00:02:09.964 raw/skeleton: not in enabled drivers build config 00:02:09.964 crypto/armv8: not in enabled drivers build config 00:02:09.964 crypto/bcmfs: not in enabled drivers build config 00:02:09.964 crypto/caam_jr: not in enabled drivers build config 00:02:09.964 crypto/ccp: not in enabled drivers build config 00:02:09.964 crypto/cnxk: not in enabled drivers 
build config 00:02:09.964 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.964 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.964 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.964 crypto/mlx5: not in enabled drivers build config 00:02:09.964 crypto/mvsam: not in enabled drivers build config 00:02:09.964 crypto/nitrox: not in enabled drivers build config 00:02:09.964 crypto/null: not in enabled drivers build config 00:02:09.964 crypto/octeontx: not in enabled drivers build config 00:02:09.964 crypto/openssl: not in enabled drivers build config 00:02:09.964 crypto/scheduler: not in enabled drivers build config 00:02:09.964 crypto/uadk: not in enabled drivers build config 00:02:09.964 crypto/virtio: not in enabled drivers build config 00:02:09.964 compress/isal: not in enabled drivers build config 00:02:09.964 compress/mlx5: not in enabled drivers build config 00:02:09.964 compress/octeontx: not in enabled drivers build config 00:02:09.965 compress/zlib: not in enabled drivers build config 00:02:09.965 regex/mlx5: not in enabled drivers build config 00:02:09.965 regex/cn9k: not in enabled drivers build config 00:02:09.965 vdpa/ifc: not in enabled drivers build config 00:02:09.965 vdpa/mlx5: not in enabled drivers build config 00:02:09.965 vdpa/sfc: not in enabled drivers build config 00:02:09.965 event/cnxk: not in enabled drivers build config 00:02:09.965 event/dlb2: not in enabled drivers build config 00:02:09.965 event/dpaa: not in enabled drivers build config 00:02:09.965 event/dpaa2: not in enabled drivers build config 00:02:09.965 event/dsw: not in enabled drivers build config 00:02:09.965 event/opdl: not in enabled drivers build config 00:02:09.965 event/skeleton: not in enabled drivers build config 00:02:09.965 event/sw: not in enabled drivers build config 00:02:09.965 event/octeontx: not in enabled drivers build config 00:02:09.965 baseband/acc: not in enabled drivers build config 00:02:09.965 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:09.965 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:09.965 baseband/la12xx: not in enabled drivers build config 00:02:09.965 baseband/null: not in enabled drivers build config 00:02:09.965 baseband/turbo_sw: not in enabled drivers build config 00:02:09.965 gpu/cuda: not in enabled drivers build config 00:02:09.965 00:02:09.965 00:02:09.965 Build targets in project: 311 00:02:09.965 00:02:09.965 DPDK 22.11.4 00:02:09.965 00:02:09.965 User defined options 00:02:09.965 libdir : lib 00:02:09.965 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:09.965 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:09.965 c_link_args : 00:02:09.965 enable_docs : false 00:02:09.965 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:09.965 enable_kmods : false 00:02:09.965 machine : native 00:02:09.965 tests : false 00:02:09.965 00:02:09.965 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.965 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
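[Editor's note] The "User defined options" block above records how this DPDK 22.11.4 tree was configured before the ninja build that follows. As a minimal sketch only, the equivalent configure step could look like the commands below, written in the explicit `meson setup` form that the deprecation warning above recommends; the relative build directory, the omitted wrapper logic from autobuild_common.sh, and the exact option ordering are assumptions, while the option values themselves are taken from the log.

    # sketch: reproduce the configuration summarized in "User defined options"
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # then build with the same parallelism seen in the log
    ninja -C build-tmp -j10

Only drivers named in enable_drivers are built, which is why the "Content Skipped" section above lists every other driver as "not in enabled drivers build config".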
00:02:10.222 05:57:11 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:10.223 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:10.223 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:10.223 [2/740] Generating lib/rte_telemetry_def with a custom command 00:02:10.223 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:10.223 [4/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:10.223 [5/740] Generating lib/rte_kvargs_def with a custom command 00:02:10.223 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:10.480 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:10.480 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:10.480 [9/740] Linking static target lib/librte_kvargs.a 00:02:10.480 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.480 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:10.480 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.480 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.480 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:10.480 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.480 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.480 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.480 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.480 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.480 [20/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.480 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:10.480 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.738 [23/740] Linking target lib/librte_kvargs.so.23.0 00:02:10.738 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.738 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.738 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.738 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.738 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.738 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.738 [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.738 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.738 [32/740] Linking static target lib/librte_telemetry.a 00:02:10.738 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.738 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.997 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.997 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.997 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.997 [38/740] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:10.997 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.997 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.997 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.997 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.997 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.997 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.257 [45/740] Linking target lib/librte_telemetry.so.23.0 00:02:11.257 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.257 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.257 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.257 [49/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:11.257 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.257 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.257 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.257 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.257 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.257 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.257 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.257 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.257 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.257 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.515 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.515 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.515 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.515 [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.515 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.515 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.515 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:11.515 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.515 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.515 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.515 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.515 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.515 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.515 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.515 [74/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.515 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.515 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.515 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.775 [78/740] Generating 
lib/rte_eal_def with a custom command 00:02:11.775 [79/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.775 [80/740] Generating lib/rte_eal_mingw with a custom command 00:02:11.775 [81/740] Generating lib/rte_ring_def with a custom command 00:02:11.775 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:11.775 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:11.775 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:02:11.775 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.775 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.775 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.775 [88/740] Linking static target lib/librte_ring.a 00:02:11.775 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.775 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:12.033 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:12.033 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.033 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.033 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.033 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.033 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.033 [97/740] Generating lib/rte_mbuf_def with a custom command 00:02:12.292 [98/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:12.292 [99/740] Linking static target lib/librte_eal.a 00:02:12.292 [100/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.292 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.292 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.292 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.552 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.552 [105/740] Linking static target lib/librte_rcu.a 00:02:12.552 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.552 [107/740] Linking static target lib/librte_mempool.a 00:02:12.552 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:12.552 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:12.552 [110/740] Generating lib/rte_net_def with a custom command 00:02:12.810 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:12.810 [112/740] Generating lib/rte_net_mingw with a custom command 00:02:12.810 [113/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:12.810 [114/740] Generating lib/rte_meter_def with a custom command 00:02:12.810 [115/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:12.810 [116/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.810 [117/740] Generating lib/rte_meter_mingw with a custom command 00:02:12.810 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:12.810 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:12.810 [120/740] Linking static target lib/librte_meter.a 00:02:12.810 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.070 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.070 [123/740] Linking static target lib/librte_net.a 00:02:13.070 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.070 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.070 [126/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.070 [127/740] Linking static target lib/librte_mbuf.a 00:02:13.070 [128/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.331 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.331 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.331 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.331 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.331 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.590 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.590 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.590 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.849 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:13.849 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.849 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:13.849 [140/740] Generating lib/rte_pci_def with a custom command 00:02:13.849 [141/740] Generating lib/rte_pci_mingw with a custom command 00:02:13.849 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:13.849 [143/740] Linking static target lib/librte_pci.a 00:02:13.849 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.849 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.849 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.849 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.108 [148/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.108 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.108 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.108 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:14.108 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:14.108 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.108 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.108 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.108 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:14.108 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:14.108 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:14.108 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.367 [160/740] Generating lib/rte_metrics_def with a custom command 00:02:14.367 [161/740] Generating lib/rte_metrics_mingw with a custom command 00:02:14.367 [162/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.367 [163/740] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.367 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.367 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:14.367 [166/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:14.367 [167/740] Generating lib/rte_hash_def with a custom command 00:02:14.367 [168/740] Linking static target lib/librte_cmdline.a 00:02:14.367 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:14.367 [170/740] Generating lib/rte_timer_def with a custom command 00:02:14.367 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.367 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:14.626 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.626 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:14.626 [175/740] Linking static target lib/librte_metrics.a 00:02:14.885 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.885 [177/740] Linking static target lib/librte_timer.a 00:02:14.885 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.145 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:15.145 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:15.145 [181/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.145 [182/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.145 [183/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:15.145 [184/740] Generating lib/rte_acl_def with a custom command 00:02:15.145 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:15.404 [186/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:15.405 [187/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:15.405 [188/740] Generating lib/rte_bbdev_def with a custom command 00:02:15.405 [189/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.405 [190/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:15.405 [191/740] Linking static target lib/librte_ethdev.a 00:02:15.405 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:15.405 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:15.974 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:15.974 [195/740] Linking static target lib/librte_bitratestats.a 00:02:15.974 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:15.974 [197/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:15.974 [198/740] Linking static target lib/librte_bbdev.a 00:02:15.974 [199/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:15.974 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.234 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:16.494 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:16.494 [203/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.494 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:16.754 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:16.754 
[206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.754 [207/740] Linking static target lib/librte_hash.a 00:02:16.754 [208/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:17.013 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:17.013 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:17.013 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:17.273 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:17.273 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:17.273 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:17.273 [215/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:17.273 [216/740] Linking static target lib/librte_cfgfile.a 00:02:17.273 [217/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:17.273 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:17.532 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.532 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:17.532 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:17.532 [222/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:17.532 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:17.791 [224/740] Linking static target lib/librte_bpf.a 00:02:17.791 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.791 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.791 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:17.791 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:17.791 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.050 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.050 [231/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.050 [232/740] Linking static target lib/librte_compressdev.a 00:02:18.050 [233/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.050 [234/740] Generating lib/rte_distributor_def with a custom command 00:02:18.050 [235/740] Generating lib/rte_distributor_mingw with a custom command 00:02:18.050 [236/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:18.050 [237/740] Linking static target lib/librte_acl.a 00:02:18.050 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.050 [239/740] Generating lib/rte_efd_def with a custom command 00:02:18.310 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:18.310 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.310 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:18.310 [243/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.310 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:18.569 [245/740] Linking target lib/librte_eal.so.23.0 00:02:18.569 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:18.569 [247/740] Generating symbol file 
lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:18.569 [248/740] Linking target lib/librte_ring.so.23.0 00:02:18.569 [249/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:18.828 [250/740] Linking target lib/librte_meter.so.23.0 00:02:18.828 [251/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.828 [252/740] Linking target lib/librte_pci.so.23.0 00:02:18.828 [253/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:18.828 [254/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:18.828 [255/740] Linking target lib/librte_rcu.so.23.0 00:02:18.828 [256/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:18.829 [257/740] Linking target lib/librte_mempool.so.23.0 00:02:18.829 [258/740] Linking target lib/librte_timer.so.23.0 00:02:18.829 [259/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:18.829 [260/740] Linking target lib/librte_acl.so.23.0 00:02:19.087 [261/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:19.087 [262/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:19.087 [263/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:19.087 [264/740] Linking target lib/librte_cfgfile.so.23.0 00:02:19.087 [265/740] Linking static target lib/librte_distributor.a 00:02:19.087 [266/740] Linking target lib/librte_mbuf.so.23.0 00:02:19.087 [267/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:19.087 [268/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:19.087 [269/740] Linking target lib/librte_net.so.23.0 00:02:19.344 [270/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.344 [271/740] Linking target lib/librte_bbdev.so.23.0 00:02:19.344 [272/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:19.344 [273/740] Linking target lib/librte_compressdev.so.23.0 00:02:19.344 [274/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:19.344 [275/740] Linking target lib/librte_cmdline.so.23.0 00:02:19.344 [276/740] Linking static target lib/librte_efd.a 00:02:19.344 [277/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:19.344 [278/740] Linking target lib/librte_distributor.so.23.0 00:02:19.344 [279/740] Linking target lib/librte_hash.so.23.0 00:02:19.344 [280/740] Generating lib/rte_eventdev_def with a custom command 00:02:19.344 [281/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:19.603 [282/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:19.603 [283/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:19.603 [284/740] Generating lib/rte_gpudev_def with a custom command 00:02:19.603 [285/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:19.603 [286/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.603 [287/740] Linking target lib/librte_efd.so.23.0 00:02:19.862 [288/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.862 [289/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:02:19.862 [290/740] Linking target lib/librte_ethdev.so.23.0 00:02:19.862 [291/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.862 [292/740] Linking static target lib/librte_cryptodev.a 00:02:19.862 [293/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:20.121 [294/740] Linking target lib/librte_metrics.so.23.0 00:02:20.121 [295/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:20.121 [296/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:20.121 [297/740] Linking target lib/librte_bitratestats.so.23.0 00:02:20.121 [298/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:20.121 [299/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:20.121 [300/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:20.121 [301/740] Generating lib/rte_gro_def with a custom command 00:02:20.121 [302/740] Linking target lib/librte_bpf.so.23.0 00:02:20.121 [303/740] Linking static target lib/librte_gpudev.a 00:02:20.121 [304/740] Generating lib/rte_gro_mingw with a custom command 00:02:20.380 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:20.380 [306/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:20.380 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.639 [308/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:20.639 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:20.639 [310/740] Generating lib/rte_gso_def with a custom command 00:02:20.639 [311/740] Generating lib/rte_gso_mingw with a custom command 00:02:20.899 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:20.899 [313/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.899 [314/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:20.899 [315/740] Linking static target lib/librte_gro.a 00:02:20.899 [316/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:20.899 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:20.899 [318/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:20.899 [319/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.899 [320/740] Linking static target lib/librte_eventdev.a 00:02:20.899 [321/740] Linking target lib/librte_gpudev.so.23.0 00:02:20.899 [322/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.158 [323/740] Linking target lib/librte_gro.so.23.0 00:02:21.158 [324/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:21.158 [325/740] Linking static target lib/librte_gso.a 00:02:21.158 [326/740] Generating lib/rte_ip_frag_def with a custom command 00:02:21.158 [327/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:21.158 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:21.158 [329/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.158 [330/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:21.158 [331/740] Linking static target lib/librte_jobstats.a 00:02:21.158 [332/740] Generating lib/rte_jobstats_def with a custom command 00:02:21.158 [333/740] Generating lib/rte_jobstats_mingw with a 
custom command 00:02:21.158 [334/740] Linking target lib/librte_gso.so.23.0 00:02:21.158 [335/740] Generating lib/rte_latencystats_def with a custom command 00:02:21.417 [336/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:21.417 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:21.417 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:21.417 [339/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:21.417 [340/740] Generating lib/rte_lpm_def with a custom command 00:02:21.417 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:21.417 [342/740] Generating lib/rte_lpm_mingw with a custom command 00:02:21.417 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.674 [344/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:21.674 [345/740] Linking static target lib/librte_ip_frag.a 00:02:21.674 [346/740] Linking target lib/librte_jobstats.so.23.0 00:02:21.674 [347/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.674 [348/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:21.674 [349/740] Linking target lib/librte_cryptodev.so.23.0 00:02:21.674 [350/740] Linking static target lib/librte_latencystats.a 00:02:21.934 [351/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.934 [352/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:21.934 [353/740] Linking target lib/librte_ip_frag.so.23.0 00:02:21.934 [354/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:21.934 [355/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:21.934 [356/740] Generating lib/rte_member_def with a custom command 00:02:21.934 [357/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:21.934 [358/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:21.934 [359/740] Generating lib/rte_member_mingw with a custom command 00:02:21.934 [360/740] Generating lib/rte_pcapng_def with a custom command 00:02:21.934 [361/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:21.934 [362/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.934 [363/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:21.934 [364/740] Linking target lib/librte_latencystats.so.23.0 00:02:22.192 [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:22.192 [366/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:22.192 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.192 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.192 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:22.192 [370/740] Linking static target lib/librte_lpm.a 00:02:22.450 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:22.450 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:22.450 [373/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:22.450 [374/740] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:22.450 [375/740] Generating lib/rte_power_def with a custom command 00:02:22.450 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:22.450 [377/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.709 [378/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:22.709 [379/740] Generating lib/rte_rawdev_def with a custom command 00:02:22.709 [380/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.709 [381/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:22.709 [382/740] Linking target lib/librte_eventdev.so.23.0 00:02:22.709 [383/740] Generating lib/rte_regexdev_def with a custom command 00:02:22.709 [384/740] Linking target lib/librte_lpm.so.23.0 00:02:22.709 [385/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:22.709 [386/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:22.709 [387/740] Linking static target lib/librte_pcapng.a 00:02:22.709 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.709 [389/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:22.709 [390/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:22.709 [391/740] Generating lib/rte_dmadev_def with a custom command 00:02:22.709 [392/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:22.709 [393/740] Generating lib/rte_rib_def with a custom command 00:02:22.709 [394/740] Generating lib/rte_rib_mingw with a custom command 00:02:22.968 [395/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:22.969 [396/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:22.969 [397/740] Linking static target lib/librte_rawdev.a 00:02:22.969 [398/740] Generating lib/rte_reorder_def with a custom command 00:02:22.969 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:02:22.969 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.969 [401/740] Linking target lib/librte_pcapng.so.23.0 00:02:22.969 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.969 [403/740] Linking static target lib/librte_dmadev.a 00:02:22.969 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.969 [405/740] Linking static target lib/librte_power.a 00:02:22.969 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:23.228 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:23.228 [408/740] Linking static target lib/librte_regexdev.a 00:02:23.228 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:23.228 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.228 [411/740] Linking target lib/librte_rawdev.so.23.0 00:02:23.228 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:23.228 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:23.228 [414/740] Generating lib/rte_sched_def with a custom command 00:02:23.488 [415/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:23.488 [416/740] Generating lib/rte_sched_mingw with a custom command 00:02:23.488 [417/740] Generating lib/rte_security_def with 
a custom command 00:02:23.488 [418/740] Generating lib/rte_security_mingw with a custom command 00:02:23.488 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:23.488 [420/740] Linking static target lib/librte_reorder.a 00:02:23.488 [421/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.488 [422/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:23.488 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:23.488 [424/740] Linking static target lib/librte_member.a 00:02:23.488 [425/740] Linking target lib/librte_dmadev.so.23.0 00:02:23.488 [426/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:23.488 [427/740] Generating lib/rte_stack_def with a custom command 00:02:23.488 [428/740] Generating lib/rte_stack_mingw with a custom command 00:02:23.488 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:23.488 [430/740] Linking static target lib/librte_stack.a 00:02:23.488 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:23.488 [432/740] Linking static target lib/librte_rib.a 00:02:23.748 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:23.748 [434/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.748 [435/740] Linking target lib/librte_reorder.so.23.0 00:02:23.748 [436/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:23.748 [437/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.748 [438/740] Linking target lib/librte_regexdev.so.23.0 00:02:23.748 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.748 [440/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.748 [441/740] Linking target lib/librte_stack.so.23.0 00:02:23.748 [442/740] Linking target lib/librte_member.so.23.0 00:02:23.748 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.007 [444/740] Linking target lib/librte_power.so.23.0 00:02:24.007 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.007 [446/740] Linking static target lib/librte_security.a 00:02:24.007 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.007 [448/740] Linking target lib/librte_rib.so.23.0 00:02:24.267 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:24.267 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.267 [451/740] Generating lib/rte_vhost_def with a custom command 00:02:24.267 [452/740] Generating lib/rte_vhost_mingw with a custom command 00:02:24.267 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.267 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.267 [455/740] Linking target lib/librte_security.so.23.0 00:02:24.525 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.525 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:24.525 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:24.525 [459/740] Linking static target lib/librte_sched.a 00:02:24.785 [460/740] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:24.785 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:24.785 [462/740] Generating lib/rte_ipsec_def with a custom command 00:02:24.785 [463/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:24.785 [464/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.044 [465/740] Linking target lib/librte_sched.so.23.0 00:02:25.044 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.044 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:25.044 [468/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:25.044 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.303 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:25.303 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:25.303 [472/740] Generating lib/rte_fib_def with a custom command 00:02:25.303 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:25.303 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:25.562 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:25.822 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:25.822 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:25.822 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:25.822 [479/740] Linking static target lib/librte_ipsec.a 00:02:25.822 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:26.080 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:26.080 [482/740] Linking static target lib/librte_fib.a 00:02:26.080 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:26.080 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:26.080 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.080 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:26.349 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:26.349 [488/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.349 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:26.349 [490/740] Linking target lib/librte_fib.so.23.0 00:02:26.349 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:26.933 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:26.933 [493/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:26.933 [494/740] Generating lib/rte_port_def with a custom command 00:02:26.933 [495/740] Generating lib/rte_port_mingw with a custom command 00:02:26.933 [496/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:26.933 [497/740] Generating lib/rte_pdump_def with a custom command 00:02:26.933 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:02:26.933 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:26.933 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:27.192 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:27.192 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:27.192 [503/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:27.192 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:27.192 [505/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:27.452 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:27.452 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:27.711 [508/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:27.711 [509/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:27.711 [510/740] Linking static target lib/librte_port.a 00:02:27.711 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:27.711 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:27.970 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:27.970 [514/740] Linking static target lib/librte_pdump.a 00:02:27.970 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.229 [516/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.229 [517/740] Linking target lib/librte_port.so.23.0 00:02:28.229 [518/740] Linking target lib/librte_pdump.so.23.0 00:02:28.229 [519/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:28.229 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:28.229 [521/740] Generating lib/rte_table_def with a custom command 00:02:28.229 [522/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:28.229 [523/740] Generating lib/rte_table_mingw with a custom command 00:02:28.229 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:28.488 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:28.488 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:28.488 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:28.748 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:28.748 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:28.748 [530/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:28.748 [531/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:28.748 [532/740] Linking static target lib/librte_table.a 00:02:28.748 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:29.006 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:29.006 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:29.268 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:29.268 [537/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.268 [538/740] Linking target lib/librte_table.so.23.0 00:02:29.268 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:29.526 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:29.526 [541/740] Generating lib/rte_graph_def with a custom command 00:02:29.526 [542/740] Generating lib/rte_graph_mingw with a custom command 00:02:29.526 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:29.789 [544/740] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:29.789 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:29.789 [546/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:29.789 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:29.789 [548/740] Linking static target lib/librte_graph.a
00:02:29.789 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:30.077 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:30.077 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:30.077 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:30.336 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:30.336 [554/740] Generating lib/rte_node_def with a custom command
00:02:30.336 [555/740] Generating lib/rte_node_mingw with a custom command
00:02:30.336 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.595 [557/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:30.595 [558/740] Linking target lib/librte_graph.so.23.0
00:02:30.595 [559/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:30.595 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:30.595 [561/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:30.595 [562/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:30.595 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:30.595 [564/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:30.595 [565/740] Generating drivers/rte_bus_pci_def with a custom command
00:02:30.595 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:30.854 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:30.854 [568/740] Generating drivers/rte_bus_vdev_def with a custom command
00:02:30.854 [569/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:30.854 [570/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:30.854 [571/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:30.854 [572/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:30.854 [573/740] Linking static target lib/librte_node.a
00:02:30.854 [574/740] Generating drivers/rte_mempool_ring_def with a custom command
00:02:30.854 [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:30.854 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:30.854 [577/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:30.854 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:31.113 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:31.113 [580/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.113 [581/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:31.113 [582/740] Linking target lib/librte_node.so.23.0
00:02:31.113 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:31.113 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:31.113 [585/740] Linking static target drivers/librte_bus_vdev.a
00:02:31.113 [586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:31.372 [587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:31.372 [588/740] Linking static target drivers/librte_bus_pci.a
00:02:31.372 [589/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.372 [590/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:31.372 [591/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:31.372 [592/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:31.372 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:31.632 [594/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:31.632 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.632 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:31.632 [597/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:31.632 [598/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:31.632 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:31.892 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:31.892 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:31.892 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:31.892 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:31.892 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:31.892 [605/740] Linking static target drivers/librte_mempool_ring.a
00:02:31.892 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:31.892 [607/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:32.149 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:32.408 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:32.668 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:32.668 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:32.927 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:33.187 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:33.187 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:33.446 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:33.446 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:33.706 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:33.706 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:33.706 [619/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:33.706 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:33.965 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:34.225 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:34.484 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:34.747 [624/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:35.009 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:35.009 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:35.009 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:35.009 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:35.269 [629/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:35.269 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:35.269 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:35.269 [632/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:35.269 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:35.839 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:35.839 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:35.839 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:35.839 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:35.839 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:36.099 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:36.099 [640/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:36.099 [641/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:36.099 [642/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:36.099 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:36.099 [644/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:36.099 [645/740] Linking static target drivers/librte_net_i40e.a
00:02:36.358 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:36.358 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:36.619 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:36.619 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:36.619 [650/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.619 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:36.878 [652/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:36.878 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:37.138 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:37.138 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:37.138 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:37.138 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:37.138 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:37.138 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:37.397 [660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:37.397 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:37.397 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:37.397 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:37.657 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:37.657 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:37.916 [666/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:37.916 [667/740] Linking static target lib/librte_vhost.a
00:02:37.916 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:38.176 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:38.435 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:38.435 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:38.435 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:38.694 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:38.694 [674/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:38.694 [675/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.694 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:38.694 [677/740] Linking target lib/librte_vhost.so.23.0
00:02:38.953 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:38.953 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:38.953 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:38.953 [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:39.214 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:39.214 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:39.214 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:39.214 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:39.475 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:39.475 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:39.475 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:39.475 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:39.475 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:39.734 [691/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:39.734 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:39.992 [693/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:39.992 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:40.251 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:40.251 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:40.510 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:40.510 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:40.510 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:40.770 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:40.770 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:41.029 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:41.289 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:41.290 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:41.290 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:41.290 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:41.290 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:41.549 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:41.808 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:42.070 [710/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:42.070 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:42.389 [712/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:42.389 [713/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:42.389 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:42.389 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:42.389 [716/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:42.389 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:42.960 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:42.960 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:43.529 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:43.529 [721/740] Linking static target lib/librte_pipeline.a
00:02:44.099 [722/740] Linking target app/dpdk-test-crypto-perf
00:02:44.099 [723/740] Linking target app/dpdk-test-cmdline
00:02:44.099 [724/740] Linking target app/dpdk-dumpcap
00:02:44.099 [725/740] Linking target app/dpdk-test-acl
00:02:44.099 [726/740] Linking target app/dpdk-test-compress-perf
00:02:44.099 [727/740] Linking target app/dpdk-proc-info
00:02:44.099 [728/740] Linking target app/dpdk-pdump
00:02:44.099 [729/740] Linking target app/dpdk-test-bbdev
00:02:44.099 [730/740] Linking target app/dpdk-test-eventdev
00:02:44.358 [731/740] Linking target app/dpdk-test-pipeline
00:02:44.358 [732/740] Linking target app/dpdk-test-fib
00:02:44.358 [733/740] Linking target app/dpdk-test-flow-perf
00:02:44.358 [734/740] Linking target app/dpdk-test-gpudev
00:02:44.358 [735/740] Linking target app/dpdk-test-regex
00:02:44.358 [736/740] Linking target app/dpdk-test-sad
00:02:44.358 [737/740] Linking target app/dpdk-testpmd
00:02:44.618 [738/740] Linking target app/dpdk-test-security-perf
00:02:49.903 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.903 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:49.903 05:57:50 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:49.903 05:57:50 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:49.903 05:57:50 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:49.903 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:49.903 [0/1] Installing files.
00:02:49.903 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.903 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:49.904 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:49.905 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:49.906 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.907 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.907 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.907 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.907 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.908 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.908 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.908 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.908 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.908 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.909 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.910 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:49.911 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:49.911 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:49.911 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:49.911 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:49.911 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:49.911 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:49.911 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:49.911 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:49.911 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:49.911 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:49.911 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:49.911 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:49.911 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:49.911 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:49.911 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:49.911 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:49.911 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:49.911 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:49.911 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:49.911 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:49.911 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:49.911 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:49.911 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:49.911 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:49.911 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:49.911 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:49.911 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:49.911 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:49.911 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:49.911 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:49.911 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:49.911 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:49.911 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:49.911 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:49.911 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:49.911 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:49.911 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:49.911 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:49.911 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:49.911 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:49.911 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:49.911 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:49.911 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:49.911 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:49.911 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:49.911 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:49.911 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:49.911 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:49.911 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:49.911 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:49.911 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:49.911 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:49.911 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:49.911 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:49.911 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:49.911 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:49.911 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:49.911 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:49.911 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:49.911 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:49.911 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:49.911 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:49.911 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:49.911 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:49.911 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:49.912 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:49.912 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:49.912 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:49.912 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:49.912 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:49.912 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:49.912 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:49.912 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:49.912 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:49.912 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:49.912 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:49.912 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:49.912 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:49.912 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:49.912 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:02:49.912 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:49.912 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:49.912 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:49.912 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:49.912 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:49.912 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:49.912 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:49.912 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:49.912 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:49.912 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:49.912 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:49.912 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:49.912 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:49.912 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:49.912 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:49.912 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:49.912 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:49.912 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:49.912 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:49.912 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:49.912 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:49.912 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:49.912 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:49.912 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:49.912 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:49.912 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:49.912 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:49.912 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:49.912 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:49.912 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:49.912 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:49.912 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:49.912 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:49.912 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:49.912 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:49.912 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:49.912 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:49.912 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:49.912 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:49.912 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:49.912 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:49.912 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:49.912 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:49.912 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:49.912 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:49.912 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:49.912 05:57:51 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:49.912 05:57:51 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:49.912 00:02:49.912 real 0m46.868s 00:02:49.912 user 4m43.949s 00:02:49.912 sys 0m50.386s 00:02:49.912 05:57:51 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:49.912 05:57:51 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:49.912 ************************************ 00:02:49.912 END TEST build_native_dpdk 00:02:49.912 ************************************ 00:02:49.912 05:57:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:49.912 05:57:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:49.912 05:57:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:49.912 05:57:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:49.912 05:57:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:49.912 05:57:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:49.912 05:57:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:49.912 05:57:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:50.171 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:50.430 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.430 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:50.430 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:50.690 Using 'verbs' RDMA provider 00:03:06.523 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:21.417 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:21.677 Creating mk/config.mk...done. 00:03:21.677 Creating mk/cc.flags.mk...done. 00:03:21.677 Type 'make' to build. 00:03:21.677 05:58:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:21.677 05:58:23 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:21.677 05:58:23 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:21.677 05:58:23 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.677 ************************************ 00:03:21.677 START TEST make 00:03:21.677 ************************************ 00:03:21.677 05:58:23 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:22.243 make[1]: Nothing to be done for 'all'. 00:03:44.173 CC lib/log/log_flags.o 00:03:44.173 CC lib/log/log.o 00:03:44.173 CC lib/log/log_deprecated.o 00:03:44.173 CC lib/ut/ut.o 00:03:44.173 CC lib/ut_mock/mock.o 00:03:44.173 LIB libspdk_log.a 00:03:44.173 LIB libspdk_ut.a 00:03:44.173 LIB libspdk_ut_mock.a 00:03:44.173 SO libspdk_log.so.7.0 00:03:44.173 SO libspdk_ut.so.2.0 00:03:44.173 SO libspdk_ut_mock.so.6.0 00:03:44.173 SYMLINK libspdk_log.so 00:03:44.173 SYMLINK libspdk_ut.so 00:03:44.173 SYMLINK libspdk_ut_mock.so 00:03:44.173 CC lib/dma/dma.o 00:03:44.173 CXX lib/trace_parser/trace.o 00:03:44.173 CC lib/ioat/ioat.o 00:03:44.173 CC lib/util/base64.o 00:03:44.173 CC lib/util/crc16.o 00:03:44.173 CC lib/util/crc32.o 00:03:44.173 CC lib/util/bit_array.o 00:03:44.173 CC lib/util/cpuset.o 00:03:44.173 CC lib/util/crc32c.o 00:03:44.173 CC lib/vfio_user/host/vfio_user_pci.o 00:03:44.173 CC lib/util/crc32_ieee.o 00:03:44.173 CC lib/util/crc64.o 00:03:44.173 CC lib/util/dif.o 00:03:44.173 LIB libspdk_dma.a 00:03:44.173 CC lib/util/fd.o 00:03:44.173 SO libspdk_dma.so.4.0 00:03:44.173 CC lib/vfio_user/host/vfio_user.o 00:03:44.173 SYMLINK libspdk_dma.so 00:03:44.173 CC lib/util/fd_group.o 00:03:44.173 CC lib/util/file.o 00:03:44.173 CC lib/util/hexlify.o 00:03:44.173 CC lib/util/iov.o 00:03:44.173 LIB libspdk_ioat.a 00:03:44.173 SO libspdk_ioat.so.7.0 00:03:44.173 CC lib/util/math.o 00:03:44.173 CC lib/util/net.o 00:03:44.173 SYMLINK libspdk_ioat.so 00:03:44.173 CC lib/util/pipe.o 00:03:44.173 CC lib/util/strerror_tls.o 00:03:44.173 CC lib/util/string.o 00:03:44.173 LIB libspdk_vfio_user.a 00:03:44.173 SO libspdk_vfio_user.so.5.0 00:03:44.173 CC lib/util/uuid.o 00:03:44.173 CC lib/util/xor.o 00:03:44.173 CC lib/util/zipf.o 00:03:44.173 SYMLINK libspdk_vfio_user.so 00:03:44.173 LIB libspdk_util.a 00:03:44.173 SO libspdk_util.so.10.0 00:03:44.173 LIB libspdk_trace_parser.a 00:03:44.173 SO libspdk_trace_parser.so.5.0 00:03:44.173 SYMLINK libspdk_util.so 00:03:44.173 SYMLINK libspdk_trace_parser.so 00:03:44.173 CC lib/vmd/vmd.o 00:03:44.173 CC lib/vmd/led.o 00:03:44.173 CC lib/json/json_parse.o 
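The configure invocation above builds SPDK against the DPDK tree that was just installed (--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared), and the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs..." line shows it resolving that build through the libdpdk.pc / libdpdk-libs.pc files installed a few steps earlier. A minimal sketch of that lookup done by hand, assuming the same tree layout shown in this log (illustrative only, not part of the captured output):

    # Point pkg-config at the .pc files the DPDK install step placed under build/lib/pkgconfig
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig

    # Compile and link flags that a consumer such as SPDK's configure would pick up
    pkg-config --cflags libdpdk
    pkg-config --libs libdpdk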
00:03:44.173 CC lib/json/json_util.o 00:03:44.173 CC lib/json/json_write.o 00:03:44.173 CC lib/conf/conf.o 00:03:44.173 CC lib/idxd/idxd.o 00:03:44.173 CC lib/rdma_provider/common.o 00:03:44.173 CC lib/env_dpdk/env.o 00:03:44.173 CC lib/rdma_utils/rdma_utils.o 00:03:44.173 CC lib/idxd/idxd_user.o 00:03:44.173 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:44.173 LIB libspdk_conf.a 00:03:44.173 CC lib/idxd/idxd_kernel.o 00:03:44.173 SO libspdk_conf.so.6.0 00:03:44.173 CC lib/env_dpdk/memory.o 00:03:44.173 LIB libspdk_json.a 00:03:44.173 SYMLINK libspdk_conf.so 00:03:44.173 CC lib/env_dpdk/pci.o 00:03:44.173 LIB libspdk_rdma_utils.a 00:03:44.173 SO libspdk_rdma_utils.so.1.0 00:03:44.173 SO libspdk_json.so.6.0 00:03:44.174 SYMLINK libspdk_rdma_utils.so 00:03:44.174 LIB libspdk_rdma_provider.a 00:03:44.174 CC lib/env_dpdk/init.o 00:03:44.174 CC lib/env_dpdk/threads.o 00:03:44.174 CC lib/env_dpdk/pci_ioat.o 00:03:44.174 SYMLINK libspdk_json.so 00:03:44.174 CC lib/env_dpdk/pci_virtio.o 00:03:44.174 SO libspdk_rdma_provider.so.6.0 00:03:44.174 SYMLINK libspdk_rdma_provider.so 00:03:44.174 CC lib/env_dpdk/pci_vmd.o 00:03:44.174 CC lib/env_dpdk/pci_idxd.o 00:03:44.174 CC lib/env_dpdk/pci_event.o 00:03:44.174 CC lib/jsonrpc/jsonrpc_server.o 00:03:44.174 CC lib/env_dpdk/sigbus_handler.o 00:03:44.433 LIB libspdk_idxd.a 00:03:44.433 CC lib/env_dpdk/pci_dpdk.o 00:03:44.433 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:44.433 SO libspdk_idxd.so.12.0 00:03:44.433 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:44.433 CC lib/jsonrpc/jsonrpc_client.o 00:03:44.433 SYMLINK libspdk_idxd.so 00:03:44.433 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:44.433 LIB libspdk_vmd.a 00:03:44.433 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:44.433 SO libspdk_vmd.so.6.0 00:03:44.433 SYMLINK libspdk_vmd.so 00:03:44.693 LIB libspdk_jsonrpc.a 00:03:44.693 SO libspdk_jsonrpc.so.6.0 00:03:44.693 SYMLINK libspdk_jsonrpc.so 00:03:45.260 CC lib/rpc/rpc.o 00:03:45.260 LIB libspdk_env_dpdk.a 00:03:45.519 SO libspdk_env_dpdk.so.15.0 00:03:45.519 LIB libspdk_rpc.a 00:03:45.519 SO libspdk_rpc.so.6.0 00:03:45.519 SYMLINK libspdk_env_dpdk.so 00:03:45.519 SYMLINK libspdk_rpc.so 00:03:46.087 CC lib/trace/trace.o 00:03:46.087 CC lib/trace/trace_flags.o 00:03:46.087 CC lib/keyring/keyring.o 00:03:46.087 CC lib/trace/trace_rpc.o 00:03:46.087 CC lib/keyring/keyring_rpc.o 00:03:46.087 CC lib/notify/notify.o 00:03:46.087 CC lib/notify/notify_rpc.o 00:03:46.087 LIB libspdk_notify.a 00:03:46.087 SO libspdk_notify.so.6.0 00:03:46.087 LIB libspdk_keyring.a 00:03:46.087 SYMLINK libspdk_notify.so 00:03:46.346 LIB libspdk_trace.a 00:03:46.346 SO libspdk_keyring.so.1.0 00:03:46.346 SO libspdk_trace.so.10.0 00:03:46.346 SYMLINK libspdk_keyring.so 00:03:46.346 SYMLINK libspdk_trace.so 00:03:46.914 CC lib/thread/thread.o 00:03:46.914 CC lib/thread/iobuf.o 00:03:46.914 CC lib/sock/sock.o 00:03:46.914 CC lib/sock/sock_rpc.o 00:03:47.174 LIB libspdk_sock.a 00:03:47.174 SO libspdk_sock.so.10.0 00:03:47.433 SYMLINK libspdk_sock.so 00:03:47.692 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:47.692 CC lib/nvme/nvme_ctrlr.o 00:03:47.692 CC lib/nvme/nvme_fabric.o 00:03:47.692 CC lib/nvme/nvme_ns_cmd.o 00:03:47.692 CC lib/nvme/nvme_ns.o 00:03:47.692 CC lib/nvme/nvme_pcie_common.o 00:03:47.692 CC lib/nvme/nvme_pcie.o 00:03:47.692 CC lib/nvme/nvme_qpair.o 00:03:47.692 CC lib/nvme/nvme.o 00:03:48.630 LIB libspdk_thread.a 00:03:48.630 CC lib/nvme/nvme_quirks.o 00:03:48.630 SO libspdk_thread.so.10.1 00:03:48.630 CC lib/nvme/nvme_transport.o 00:03:48.630 CC lib/nvme/nvme_discovery.o 00:03:48.630 SYMLINK 
libspdk_thread.so 00:03:48.630 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:48.630 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:48.630 CC lib/nvme/nvme_tcp.o 00:03:48.630 CC lib/accel/accel.o 00:03:48.630 CC lib/nvme/nvme_opal.o 00:03:48.889 CC lib/nvme/nvme_io_msg.o 00:03:48.890 CC lib/nvme/nvme_poll_group.o 00:03:49.148 CC lib/nvme/nvme_zns.o 00:03:49.148 CC lib/accel/accel_rpc.o 00:03:49.148 CC lib/accel/accel_sw.o 00:03:49.148 CC lib/nvme/nvme_stubs.o 00:03:49.407 CC lib/nvme/nvme_auth.o 00:03:49.407 CC lib/blob/blobstore.o 00:03:49.407 CC lib/blob/request.o 00:03:49.407 CC lib/blob/zeroes.o 00:03:49.407 CC lib/nvme/nvme_cuse.o 00:03:49.665 CC lib/blob/blob_bs_dev.o 00:03:49.925 LIB libspdk_accel.a 00:03:49.925 CC lib/init/json_config.o 00:03:49.925 CC lib/virtio/virtio.o 00:03:49.925 CC lib/virtio/virtio_vhost_user.o 00:03:49.925 SO libspdk_accel.so.16.0 00:03:49.925 SYMLINK libspdk_accel.so 00:03:49.925 CC lib/virtio/virtio_vfio_user.o 00:03:49.925 CC lib/virtio/virtio_pci.o 00:03:50.184 CC lib/init/subsystem.o 00:03:50.184 CC lib/nvme/nvme_rdma.o 00:03:50.184 CC lib/init/subsystem_rpc.o 00:03:50.184 CC lib/init/rpc.o 00:03:50.184 CC lib/fsdev/fsdev.o 00:03:50.184 CC lib/fsdev/fsdev_io.o 00:03:50.184 CC lib/fsdev/fsdev_rpc.o 00:03:50.443 LIB libspdk_virtio.a 00:03:50.443 CC lib/bdev/bdev.o 00:03:50.443 CC lib/bdev/bdev_rpc.o 00:03:50.443 SO libspdk_virtio.so.7.0 00:03:50.443 LIB libspdk_init.a 00:03:50.443 SO libspdk_init.so.5.0 00:03:50.443 SYMLINK libspdk_virtio.so 00:03:50.443 CC lib/bdev/bdev_zone.o 00:03:50.443 CC lib/bdev/part.o 00:03:50.443 SYMLINK libspdk_init.so 00:03:50.443 CC lib/bdev/scsi_nvme.o 00:03:50.715 CC lib/event/app.o 00:03:50.715 CC lib/event/reactor.o 00:03:50.715 CC lib/event/log_rpc.o 00:03:50.715 CC lib/event/app_rpc.o 00:03:50.715 CC lib/event/scheduler_static.o 00:03:50.974 LIB libspdk_fsdev.a 00:03:50.974 SO libspdk_fsdev.so.1.0 00:03:51.232 LIB libspdk_event.a 00:03:51.232 SYMLINK libspdk_fsdev.so 00:03:51.232 SO libspdk_event.so.14.0 00:03:51.232 SYMLINK libspdk_event.so 00:03:51.491 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:51.491 LIB libspdk_nvme.a 00:03:51.750 SO libspdk_nvme.so.13.1 00:03:52.010 SYMLINK libspdk_nvme.so 00:03:52.268 LIB libspdk_fuse_dispatcher.a 00:03:52.268 SO libspdk_fuse_dispatcher.so.1.0 00:03:52.527 SYMLINK libspdk_fuse_dispatcher.so 00:03:53.096 LIB libspdk_blob.a 00:03:53.096 SO libspdk_blob.so.11.0 00:03:53.355 SYMLINK libspdk_blob.so 00:03:53.355 LIB libspdk_bdev.a 00:03:53.355 SO libspdk_bdev.so.16.0 00:03:53.355 SYMLINK libspdk_bdev.so 00:03:53.614 CC lib/blobfs/tree.o 00:03:53.614 CC lib/blobfs/blobfs.o 00:03:53.614 CC lib/lvol/lvol.o 00:03:53.614 CC lib/nvmf/ctrlr.o 00:03:53.614 CC lib/nvmf/ctrlr_discovery.o 00:03:53.614 CC lib/nvmf/ctrlr_bdev.o 00:03:53.614 CC lib/ftl/ftl_core.o 00:03:53.614 CC lib/ublk/ublk.o 00:03:53.614 CC lib/nbd/nbd.o 00:03:53.614 CC lib/scsi/dev.o 00:03:53.614 CC lib/nbd/nbd_rpc.o 00:03:53.876 CC lib/scsi/lun.o 00:03:53.876 CC lib/scsi/port.o 00:03:54.140 CC lib/ftl/ftl_init.o 00:03:54.140 LIB libspdk_nbd.a 00:03:54.140 CC lib/scsi/scsi.o 00:03:54.140 SO libspdk_nbd.so.7.0 00:03:54.140 CC lib/scsi/scsi_bdev.o 00:03:54.140 SYMLINK libspdk_nbd.so 00:03:54.140 CC lib/ublk/ublk_rpc.o 00:03:54.140 CC lib/scsi/scsi_pr.o 00:03:54.399 CC lib/ftl/ftl_layout.o 00:03:54.399 CC lib/ftl/ftl_debug.o 00:03:54.399 CC lib/scsi/scsi_rpc.o 00:03:54.399 LIB libspdk_ublk.a 00:03:54.399 SO libspdk_ublk.so.3.0 00:03:54.399 CC lib/nvmf/subsystem.o 00:03:54.399 SYMLINK libspdk_ublk.so 00:03:54.399 CC lib/ftl/ftl_io.o 
00:03:54.399 LIB libspdk_blobfs.a 00:03:54.658 CC lib/ftl/ftl_sb.o 00:03:54.658 CC lib/scsi/task.o 00:03:54.658 SO libspdk_blobfs.so.10.0 00:03:54.658 CC lib/nvmf/nvmf.o 00:03:54.658 SYMLINK libspdk_blobfs.so 00:03:54.658 CC lib/ftl/ftl_l2p.o 00:03:54.658 CC lib/ftl/ftl_l2p_flat.o 00:03:54.658 LIB libspdk_lvol.a 00:03:54.658 SO libspdk_lvol.so.10.0 00:03:54.658 CC lib/ftl/ftl_nv_cache.o 00:03:54.658 SYMLINK libspdk_lvol.so 00:03:54.658 CC lib/ftl/ftl_band.o 00:03:54.658 CC lib/nvmf/nvmf_rpc.o 00:03:54.658 CC lib/ftl/ftl_band_ops.o 00:03:54.658 LIB libspdk_scsi.a 00:03:54.917 CC lib/ftl/ftl_writer.o 00:03:54.917 SO libspdk_scsi.so.9.0 00:03:54.917 CC lib/ftl/ftl_rq.o 00:03:54.917 SYMLINK libspdk_scsi.so 00:03:54.917 CC lib/nvmf/transport.o 00:03:55.176 CC lib/nvmf/tcp.o 00:03:55.176 CC lib/ftl/ftl_reloc.o 00:03:55.176 CC lib/ftl/ftl_l2p_cache.o 00:03:55.176 CC lib/iscsi/conn.o 00:03:55.434 CC lib/iscsi/init_grp.o 00:03:55.693 CC lib/iscsi/iscsi.o 00:03:55.693 CC lib/nvmf/stubs.o 00:03:55.693 CC lib/nvmf/mdns_server.o 00:03:55.693 CC lib/nvmf/rdma.o 00:03:55.693 CC lib/nvmf/auth.o 00:03:55.952 CC lib/iscsi/md5.o 00:03:55.952 CC lib/iscsi/param.o 00:03:55.952 CC lib/ftl/ftl_p2l.o 00:03:55.952 CC lib/iscsi/portal_grp.o 00:03:55.952 CC lib/iscsi/tgt_node.o 00:03:56.210 CC lib/iscsi/iscsi_subsystem.o 00:03:56.210 CC lib/iscsi/iscsi_rpc.o 00:03:56.210 CC lib/iscsi/task.o 00:03:56.469 CC lib/ftl/mngt/ftl_mngt.o 00:03:56.469 CC lib/vhost/vhost.o 00:03:56.469 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:56.469 CC lib/vhost/vhost_rpc.o 00:03:56.727 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:56.727 CC lib/vhost/vhost_scsi.o 00:03:56.727 CC lib/vhost/vhost_blk.o 00:03:56.727 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:56.727 CC lib/vhost/rte_vhost_user.o 00:03:56.985 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:56.985 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:56.985 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:57.244 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:57.244 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:57.244 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:57.244 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:57.244 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:57.244 LIB libspdk_iscsi.a 00:03:57.502 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:57.502 SO libspdk_iscsi.so.8.0 00:03:57.502 CC lib/ftl/utils/ftl_conf.o 00:03:57.502 CC lib/ftl/utils/ftl_md.o 00:03:57.502 CC lib/ftl/utils/ftl_mempool.o 00:03:57.502 SYMLINK libspdk_iscsi.so 00:03:57.502 CC lib/ftl/utils/ftl_bitmap.o 00:03:57.502 CC lib/ftl/utils/ftl_property.o 00:03:57.760 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:57.760 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:57.760 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:57.760 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:57.760 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:57.760 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:57.760 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:58.018 LIB libspdk_vhost.a 00:03:58.018 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:58.018 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:58.018 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:58.018 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:58.019 CC lib/ftl/base/ftl_base_dev.o 00:03:58.019 SO libspdk_vhost.so.8.0 00:03:58.019 CC lib/ftl/base/ftl_base_bdev.o 00:03:58.019 CC lib/ftl/ftl_trace.o 00:03:58.019 SYMLINK libspdk_vhost.so 00:03:58.277 LIB libspdk_ftl.a 00:03:58.536 LIB libspdk_nvmf.a 00:03:58.536 SO libspdk_ftl.so.9.0 00:03:58.536 SO libspdk_nvmf.so.19.0 00:03:58.796 SYMLINK libspdk_ftl.so 00:03:58.796 SYMLINK libspdk_nvmf.so 00:03:59.363 CC module/env_dpdk/env_dpdk_rpc.o 00:03:59.363 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:03:59.363 CC module/keyring/linux/keyring.o 00:03:59.363 CC module/accel/ioat/accel_ioat.o 00:03:59.363 CC module/accel/dsa/accel_dsa.o 00:03:59.363 CC module/sock/posix/posix.o 00:03:59.363 CC module/keyring/file/keyring.o 00:03:59.363 CC module/blob/bdev/blob_bdev.o 00:03:59.363 CC module/accel/error/accel_error.o 00:03:59.363 CC module/fsdev/aio/fsdev_aio.o 00:03:59.363 LIB libspdk_env_dpdk_rpc.a 00:03:59.363 SO libspdk_env_dpdk_rpc.so.6.0 00:03:59.363 SYMLINK libspdk_env_dpdk_rpc.so 00:03:59.363 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:59.363 CC module/keyring/linux/keyring_rpc.o 00:03:59.363 CC module/keyring/file/keyring_rpc.o 00:03:59.363 CC module/accel/ioat/accel_ioat_rpc.o 00:03:59.363 CC module/accel/error/accel_error_rpc.o 00:03:59.363 LIB libspdk_scheduler_dynamic.a 00:03:59.622 SO libspdk_scheduler_dynamic.so.4.0 00:03:59.622 LIB libspdk_keyring_linux.a 00:03:59.622 SYMLINK libspdk_scheduler_dynamic.so 00:03:59.622 LIB libspdk_blob_bdev.a 00:03:59.622 LIB libspdk_keyring_file.a 00:03:59.622 SO libspdk_keyring_linux.so.1.0 00:03:59.622 CC module/accel/dsa/accel_dsa_rpc.o 00:03:59.622 SO libspdk_blob_bdev.so.11.0 00:03:59.622 SO libspdk_keyring_file.so.1.0 00:03:59.622 LIB libspdk_accel_ioat.a 00:03:59.622 LIB libspdk_accel_error.a 00:03:59.622 SO libspdk_accel_ioat.so.6.0 00:03:59.622 SO libspdk_accel_error.so.2.0 00:03:59.622 SYMLINK libspdk_keyring_linux.so 00:03:59.622 SYMLINK libspdk_blob_bdev.so 00:03:59.622 SYMLINK libspdk_keyring_file.so 00:03:59.622 CC module/fsdev/aio/linux_aio_mgr.o 00:03:59.622 SYMLINK libspdk_accel_ioat.so 00:03:59.622 SYMLINK libspdk_accel_error.so 00:03:59.622 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:59.622 LIB libspdk_accel_dsa.a 00:03:59.622 CC module/scheduler/gscheduler/gscheduler.o 00:03:59.622 SO libspdk_accel_dsa.so.5.0 00:03:59.880 CC module/accel/iaa/accel_iaa.o 00:03:59.880 SYMLINK libspdk_accel_dsa.so 00:03:59.880 LIB libspdk_scheduler_dpdk_governor.a 00:03:59.880 LIB libspdk_scheduler_gscheduler.a 00:03:59.880 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:59.880 CC module/bdev/error/vbdev_error.o 00:03:59.880 CC module/bdev/delay/vbdev_delay.o 00:03:59.880 SO libspdk_scheduler_gscheduler.so.4.0 00:03:59.880 CC module/blobfs/bdev/blobfs_bdev.o 00:03:59.880 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:59.880 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:59.880 SYMLINK libspdk_scheduler_gscheduler.so 00:03:59.880 CC module/bdev/gpt/gpt.o 00:04:00.140 CC module/bdev/lvol/vbdev_lvol.o 00:04:00.140 CC module/accel/iaa/accel_iaa_rpc.o 00:04:00.140 LIB libspdk_fsdev_aio.a 00:04:00.140 SO libspdk_fsdev_aio.so.1.0 00:04:00.140 CC module/bdev/malloc/bdev_malloc.o 00:04:00.140 LIB libspdk_sock_posix.a 00:04:00.140 LIB libspdk_accel_iaa.a 00:04:00.140 LIB libspdk_blobfs_bdev.a 00:04:00.140 CC module/bdev/gpt/vbdev_gpt.o 00:04:00.140 SYMLINK libspdk_fsdev_aio.so 00:04:00.140 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:00.140 CC module/bdev/error/vbdev_error_rpc.o 00:04:00.140 SO libspdk_sock_posix.so.6.0 00:04:00.140 SO libspdk_accel_iaa.so.3.0 00:04:00.140 SO libspdk_blobfs_bdev.so.6.0 00:04:00.402 SYMLINK libspdk_accel_iaa.so 00:04:00.402 SYMLINK libspdk_sock_posix.so 00:04:00.402 SYMLINK libspdk_blobfs_bdev.so 00:04:00.402 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:00.402 CC module/bdev/null/bdev_null.o 00:04:00.402 LIB libspdk_bdev_error.a 00:04:00.402 LIB libspdk_bdev_delay.a 00:04:00.402 SO libspdk_bdev_error.so.6.0 00:04:00.402 SO libspdk_bdev_delay.so.6.0 00:04:00.403 CC 
module/bdev/raid/bdev_raid.o 00:04:00.403 CC module/bdev/nvme/bdev_nvme.o 00:04:00.403 CC module/bdev/passthru/vbdev_passthru.o 00:04:00.403 SYMLINK libspdk_bdev_delay.so 00:04:00.403 SYMLINK libspdk_bdev_error.so 00:04:00.403 LIB libspdk_bdev_gpt.a 00:04:00.403 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:00.662 SO libspdk_bdev_gpt.so.6.0 00:04:00.662 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:00.662 SYMLINK libspdk_bdev_gpt.so 00:04:00.662 CC module/bdev/null/bdev_null_rpc.o 00:04:00.662 CC module/bdev/split/vbdev_split.o 00:04:00.662 LIB libspdk_bdev_lvol.a 00:04:00.662 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:00.662 LIB libspdk_bdev_malloc.a 00:04:00.662 SO libspdk_bdev_lvol.so.6.0 00:04:00.921 CC module/bdev/aio/bdev_aio.o 00:04:00.921 SO libspdk_bdev_malloc.so.6.0 00:04:00.921 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:00.921 LIB libspdk_bdev_null.a 00:04:00.921 SYMLINK libspdk_bdev_lvol.so 00:04:00.921 CC module/bdev/split/vbdev_split_rpc.o 00:04:00.921 SO libspdk_bdev_null.so.6.0 00:04:00.921 SYMLINK libspdk_bdev_malloc.so 00:04:00.921 CC module/bdev/nvme/nvme_rpc.o 00:04:00.921 SYMLINK libspdk_bdev_null.so 00:04:00.921 LIB libspdk_bdev_passthru.a 00:04:00.921 SO libspdk_bdev_passthru.so.6.0 00:04:00.921 LIB libspdk_bdev_split.a 00:04:01.179 CC module/bdev/ftl/bdev_ftl.o 00:04:01.179 SO libspdk_bdev_split.so.6.0 00:04:01.179 SYMLINK libspdk_bdev_passthru.so 00:04:01.180 CC module/bdev/aio/bdev_aio_rpc.o 00:04:01.180 CC module/bdev/iscsi/bdev_iscsi.o 00:04:01.180 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:01.180 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:01.180 SYMLINK libspdk_bdev_split.so 00:04:01.180 CC module/bdev/nvme/bdev_mdns_client.o 00:04:01.180 CC module/bdev/raid/bdev_raid_rpc.o 00:04:01.180 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:01.439 LIB libspdk_bdev_aio.a 00:04:01.439 LIB libspdk_bdev_zone_block.a 00:04:01.439 CC module/bdev/nvme/vbdev_opal.o 00:04:01.439 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:01.439 SO libspdk_bdev_aio.so.6.0 00:04:01.439 SO libspdk_bdev_zone_block.so.6.0 00:04:01.439 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:01.439 SYMLINK libspdk_bdev_zone_block.so 00:04:01.439 SYMLINK libspdk_bdev_aio.so 00:04:01.439 CC module/bdev/raid/bdev_raid_sb.o 00:04:01.439 CC module/bdev/raid/raid0.o 00:04:01.439 LIB libspdk_bdev_ftl.a 00:04:01.439 SO libspdk_bdev_ftl.so.6.0 00:04:01.439 LIB libspdk_bdev_iscsi.a 00:04:01.698 SO libspdk_bdev_iscsi.so.6.0 00:04:01.698 CC module/bdev/raid/raid1.o 00:04:01.698 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:01.698 SYMLINK libspdk_bdev_ftl.so 00:04:01.698 CC module/bdev/raid/concat.o 00:04:01.698 CC module/bdev/raid/raid5f.o 00:04:01.698 SYMLINK libspdk_bdev_iscsi.so 00:04:01.698 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:01.698 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:02.267 LIB libspdk_bdev_raid.a 00:04:02.267 LIB libspdk_bdev_virtio.a 00:04:02.267 SO libspdk_bdev_virtio.so.6.0 00:04:02.267 SO libspdk_bdev_raid.so.6.0 00:04:02.267 SYMLINK libspdk_bdev_virtio.so 00:04:02.267 SYMLINK libspdk_bdev_raid.so 00:04:03.205 LIB libspdk_bdev_nvme.a 00:04:03.205 SO libspdk_bdev_nvme.so.7.0 00:04:03.205 SYMLINK libspdk_bdev_nvme.so 00:04:03.774 CC module/event/subsystems/vmd/vmd.o 00:04:03.774 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:03.774 CC module/event/subsystems/iobuf/iobuf.o 00:04:03.774 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:03.774 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:03.774 CC module/event/subsystems/fsdev/fsdev.o 00:04:03.774 CC 
module/event/subsystems/scheduler/scheduler.o 00:04:03.774 CC module/event/subsystems/keyring/keyring.o 00:04:03.774 CC module/event/subsystems/sock/sock.o 00:04:03.774 LIB libspdk_event_vhost_blk.a 00:04:03.774 LIB libspdk_event_keyring.a 00:04:03.774 LIB libspdk_event_fsdev.a 00:04:03.774 LIB libspdk_event_scheduler.a 00:04:03.774 LIB libspdk_event_iobuf.a 00:04:03.774 LIB libspdk_event_vmd.a 00:04:03.774 SO libspdk_event_vhost_blk.so.3.0 00:04:04.033 SO libspdk_event_keyring.so.1.0 00:04:04.033 SO libspdk_event_fsdev.so.1.0 00:04:04.033 LIB libspdk_event_sock.a 00:04:04.033 SO libspdk_event_scheduler.so.4.0 00:04:04.033 SO libspdk_event_iobuf.so.3.0 00:04:04.033 SO libspdk_event_sock.so.5.0 00:04:04.033 SO libspdk_event_vmd.so.6.0 00:04:04.033 SYMLINK libspdk_event_vhost_blk.so 00:04:04.033 SYMLINK libspdk_event_keyring.so 00:04:04.034 SYMLINK libspdk_event_fsdev.so 00:04:04.034 SYMLINK libspdk_event_scheduler.so 00:04:04.034 SYMLINK libspdk_event_iobuf.so 00:04:04.034 SYMLINK libspdk_event_sock.so 00:04:04.034 SYMLINK libspdk_event_vmd.so 00:04:04.293 CC module/event/subsystems/accel/accel.o 00:04:04.553 LIB libspdk_event_accel.a 00:04:04.553 SO libspdk_event_accel.so.6.0 00:04:04.813 SYMLINK libspdk_event_accel.so 00:04:05.072 CC module/event/subsystems/bdev/bdev.o 00:04:05.331 LIB libspdk_event_bdev.a 00:04:05.331 SO libspdk_event_bdev.so.6.0 00:04:05.331 SYMLINK libspdk_event_bdev.so 00:04:05.591 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:05.591 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:05.591 CC module/event/subsystems/ublk/ublk.o 00:04:05.591 CC module/event/subsystems/scsi/scsi.o 00:04:05.591 CC module/event/subsystems/nbd/nbd.o 00:04:05.850 LIB libspdk_event_ublk.a 00:04:05.850 LIB libspdk_event_nbd.a 00:04:05.850 LIB libspdk_event_scsi.a 00:04:05.850 SO libspdk_event_nbd.so.6.0 00:04:05.850 SO libspdk_event_ublk.so.3.0 00:04:05.850 SO libspdk_event_scsi.so.6.0 00:04:05.850 SYMLINK libspdk_event_nbd.so 00:04:05.850 LIB libspdk_event_nvmf.a 00:04:05.850 SYMLINK libspdk_event_scsi.so 00:04:06.109 SYMLINK libspdk_event_ublk.so 00:04:06.109 SO libspdk_event_nvmf.so.6.0 00:04:06.109 SYMLINK libspdk_event_nvmf.so 00:04:06.367 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:06.367 CC module/event/subsystems/iscsi/iscsi.o 00:04:06.367 LIB libspdk_event_vhost_scsi.a 00:04:06.367 LIB libspdk_event_iscsi.a 00:04:06.367 SO libspdk_event_vhost_scsi.so.3.0 00:04:06.627 SO libspdk_event_iscsi.so.6.0 00:04:06.627 SYMLINK libspdk_event_vhost_scsi.so 00:04:06.627 SYMLINK libspdk_event_iscsi.so 00:04:06.887 SO libspdk.so.6.0 00:04:06.887 SYMLINK libspdk.so 00:04:07.146 CC app/trace_record/trace_record.o 00:04:07.146 TEST_HEADER include/spdk/accel.h 00:04:07.146 TEST_HEADER include/spdk/accel_module.h 00:04:07.146 TEST_HEADER include/spdk/assert.h 00:04:07.146 CXX app/trace/trace.o 00:04:07.146 TEST_HEADER include/spdk/barrier.h 00:04:07.146 CC test/rpc_client/rpc_client_test.o 00:04:07.146 TEST_HEADER include/spdk/base64.h 00:04:07.146 TEST_HEADER include/spdk/bdev.h 00:04:07.146 TEST_HEADER include/spdk/bdev_module.h 00:04:07.146 TEST_HEADER include/spdk/bdev_zone.h 00:04:07.146 TEST_HEADER include/spdk/bit_array.h 00:04:07.146 TEST_HEADER include/spdk/bit_pool.h 00:04:07.146 TEST_HEADER include/spdk/blob_bdev.h 00:04:07.146 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:07.146 TEST_HEADER include/spdk/blobfs.h 00:04:07.146 TEST_HEADER include/spdk/blob.h 00:04:07.146 TEST_HEADER include/spdk/conf.h 00:04:07.146 TEST_HEADER include/spdk/config.h 00:04:07.146 TEST_HEADER 
include/spdk/cpuset.h 00:04:07.146 TEST_HEADER include/spdk/crc16.h 00:04:07.146 TEST_HEADER include/spdk/crc32.h 00:04:07.146 TEST_HEADER include/spdk/crc64.h 00:04:07.146 TEST_HEADER include/spdk/dif.h 00:04:07.146 TEST_HEADER include/spdk/dma.h 00:04:07.146 TEST_HEADER include/spdk/endian.h 00:04:07.146 TEST_HEADER include/spdk/env_dpdk.h 00:04:07.146 TEST_HEADER include/spdk/env.h 00:04:07.146 TEST_HEADER include/spdk/event.h 00:04:07.146 CC app/nvmf_tgt/nvmf_main.o 00:04:07.146 TEST_HEADER include/spdk/fd_group.h 00:04:07.146 TEST_HEADER include/spdk/fd.h 00:04:07.146 TEST_HEADER include/spdk/file.h 00:04:07.146 TEST_HEADER include/spdk/fsdev.h 00:04:07.146 TEST_HEADER include/spdk/fsdev_module.h 00:04:07.146 TEST_HEADER include/spdk/ftl.h 00:04:07.146 CC test/thread/poller_perf/poller_perf.o 00:04:07.146 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:07.146 TEST_HEADER include/spdk/gpt_spec.h 00:04:07.146 TEST_HEADER include/spdk/hexlify.h 00:04:07.146 TEST_HEADER include/spdk/histogram_data.h 00:04:07.146 TEST_HEADER include/spdk/idxd.h 00:04:07.146 TEST_HEADER include/spdk/idxd_spec.h 00:04:07.146 CC examples/util/zipf/zipf.o 00:04:07.146 TEST_HEADER include/spdk/init.h 00:04:07.146 TEST_HEADER include/spdk/ioat.h 00:04:07.146 TEST_HEADER include/spdk/ioat_spec.h 00:04:07.146 TEST_HEADER include/spdk/iscsi_spec.h 00:04:07.146 TEST_HEADER include/spdk/json.h 00:04:07.146 TEST_HEADER include/spdk/jsonrpc.h 00:04:07.146 TEST_HEADER include/spdk/keyring.h 00:04:07.146 TEST_HEADER include/spdk/keyring_module.h 00:04:07.146 TEST_HEADER include/spdk/likely.h 00:04:07.146 TEST_HEADER include/spdk/log.h 00:04:07.146 TEST_HEADER include/spdk/lvol.h 00:04:07.146 TEST_HEADER include/spdk/memory.h 00:04:07.146 TEST_HEADER include/spdk/mmio.h 00:04:07.146 CC test/dma/test_dma/test_dma.o 00:04:07.146 TEST_HEADER include/spdk/nbd.h 00:04:07.146 TEST_HEADER include/spdk/net.h 00:04:07.146 TEST_HEADER include/spdk/notify.h 00:04:07.146 TEST_HEADER include/spdk/nvme.h 00:04:07.146 TEST_HEADER include/spdk/nvme_intel.h 00:04:07.146 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:07.146 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:07.146 TEST_HEADER include/spdk/nvme_spec.h 00:04:07.146 TEST_HEADER include/spdk/nvme_zns.h 00:04:07.146 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:07.146 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:07.146 TEST_HEADER include/spdk/nvmf.h 00:04:07.146 TEST_HEADER include/spdk/nvmf_spec.h 00:04:07.146 TEST_HEADER include/spdk/nvmf_transport.h 00:04:07.146 TEST_HEADER include/spdk/opal.h 00:04:07.146 TEST_HEADER include/spdk/opal_spec.h 00:04:07.146 TEST_HEADER include/spdk/pci_ids.h 00:04:07.146 TEST_HEADER include/spdk/pipe.h 00:04:07.146 CC test/env/mem_callbacks/mem_callbacks.o 00:04:07.146 TEST_HEADER include/spdk/queue.h 00:04:07.146 TEST_HEADER include/spdk/reduce.h 00:04:07.146 TEST_HEADER include/spdk/rpc.h 00:04:07.405 CC test/app/bdev_svc/bdev_svc.o 00:04:07.405 TEST_HEADER include/spdk/scheduler.h 00:04:07.405 TEST_HEADER include/spdk/scsi.h 00:04:07.405 TEST_HEADER include/spdk/scsi_spec.h 00:04:07.405 TEST_HEADER include/spdk/sock.h 00:04:07.405 TEST_HEADER include/spdk/stdinc.h 00:04:07.405 TEST_HEADER include/spdk/string.h 00:04:07.405 TEST_HEADER include/spdk/thread.h 00:04:07.405 TEST_HEADER include/spdk/trace.h 00:04:07.405 TEST_HEADER include/spdk/trace_parser.h 00:04:07.405 TEST_HEADER include/spdk/tree.h 00:04:07.405 TEST_HEADER include/spdk/ublk.h 00:04:07.405 TEST_HEADER include/spdk/util.h 00:04:07.405 TEST_HEADER include/spdk/uuid.h 00:04:07.405 
TEST_HEADER include/spdk/version.h 00:04:07.405 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:07.405 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:07.405 TEST_HEADER include/spdk/vhost.h 00:04:07.405 TEST_HEADER include/spdk/vmd.h 00:04:07.405 TEST_HEADER include/spdk/xor.h 00:04:07.405 TEST_HEADER include/spdk/zipf.h 00:04:07.405 CXX test/cpp_headers/accel.o 00:04:07.405 LINK zipf 00:04:07.405 LINK rpc_client_test 00:04:07.405 LINK poller_perf 00:04:07.405 LINK spdk_trace_record 00:04:07.405 LINK nvmf_tgt 00:04:07.405 LINK mem_callbacks 00:04:07.405 LINK bdev_svc 00:04:07.405 CXX test/cpp_headers/accel_module.o 00:04:07.663 LINK spdk_trace 00:04:07.663 CXX test/cpp_headers/assert.o 00:04:07.663 CXX test/cpp_headers/barrier.o 00:04:07.663 LINK test_dma 00:04:07.663 CC test/env/vtophys/vtophys.o 00:04:07.663 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.663 CC examples/ioat/perf/perf.o 00:04:07.663 CC examples/ioat/verify/verify.o 00:04:07.663 CXX test/cpp_headers/base64.o 00:04:07.921 CC test/app/histogram_perf/histogram_perf.o 00:04:07.921 LINK vtophys 00:04:07.921 CC examples/vmd/lsvmd/lsvmd.o 00:04:07.921 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.921 CXX test/cpp_headers/bdev.o 00:04:07.921 CC app/spdk_tgt/spdk_tgt.o 00:04:07.921 LINK iscsi_tgt 00:04:07.921 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:07.921 LINK ioat_perf 00:04:07.921 LINK verify 00:04:07.921 LINK lsvmd 00:04:08.178 LINK histogram_perf 00:04:08.179 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:08.179 CXX test/cpp_headers/bdev_module.o 00:04:08.179 LINK spdk_tgt 00:04:08.179 CXX test/cpp_headers/bdev_zone.o 00:04:08.179 CXX test/cpp_headers/bit_array.o 00:04:08.179 CC app/spdk_lspci/spdk_lspci.o 00:04:08.179 CXX test/cpp_headers/bit_pool.o 00:04:08.179 CC examples/vmd/led/led.o 00:04:08.179 LINK env_dpdk_post_init 00:04:08.437 LINK spdk_lspci 00:04:08.437 LINK nvme_fuzz 00:04:08.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:08.437 CXX test/cpp_headers/blob_bdev.o 00:04:08.437 CC app/spdk_nvme_perf/perf.o 00:04:08.437 LINK led 00:04:08.437 CC app/spdk_nvme_identify/identify.o 00:04:08.437 CC examples/idxd/perf/perf.o 00:04:08.437 CC test/env/memory/memory_ut.o 00:04:08.695 CXX test/cpp_headers/blobfs_bdev.o 00:04:08.695 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:08.695 CXX test/cpp_headers/blobfs.o 00:04:08.695 CXX test/cpp_headers/blob.o 00:04:08.695 CC test/event/event_perf/event_perf.o 00:04:08.695 CXX test/cpp_headers/conf.o 00:04:08.952 CC test/event/reactor_perf/reactor_perf.o 00:04:08.952 CC test/event/reactor/reactor.o 00:04:08.952 LINK event_perf 00:04:08.952 CXX test/cpp_headers/config.o 00:04:08.952 LINK idxd_perf 00:04:08.952 LINK reactor_perf 00:04:08.952 LINK reactor 00:04:08.952 CXX test/cpp_headers/cpuset.o 00:04:08.952 LINK vhost_fuzz 00:04:09.211 CXX test/cpp_headers/crc16.o 00:04:09.211 CXX test/cpp_headers/crc32.o 00:04:09.211 CC test/event/app_repeat/app_repeat.o 00:04:09.211 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:09.211 CC app/spdk_nvme_discover/discovery_aer.o 00:04:09.471 CC test/nvme/aer/aer.o 00:04:09.471 CC app/spdk_top/spdk_top.o 00:04:09.471 CXX test/cpp_headers/crc64.o 00:04:09.471 LINK memory_ut 00:04:09.471 LINK app_repeat 00:04:09.471 LINK spdk_nvme_perf 00:04:09.471 LINK interrupt_tgt 00:04:09.471 LINK spdk_nvme_discover 00:04:09.471 LINK spdk_nvme_identify 00:04:09.471 CXX test/cpp_headers/dif.o 00:04:09.731 LINK aer 00:04:09.731 CC test/env/pci/pci_ut.o 00:04:09.731 CC test/app/jsoncat/jsoncat.o 00:04:09.731 CC test/event/scheduler/scheduler.o 
00:04:09.731 CXX test/cpp_headers/dma.o 00:04:09.731 CC test/app/stub/stub.o 00:04:09.991 LINK jsoncat 00:04:09.991 CC examples/thread/thread/thread_ex.o 00:04:09.991 CC examples/sock/hello_world/hello_sock.o 00:04:09.991 CXX test/cpp_headers/endian.o 00:04:09.991 CC test/nvme/reset/reset.o 00:04:09.991 LINK scheduler 00:04:09.991 LINK stub 00:04:09.991 LINK iscsi_fuzz 00:04:10.251 CXX test/cpp_headers/env_dpdk.o 00:04:10.251 LINK thread 00:04:10.251 LINK pci_ut 00:04:10.251 LINK hello_sock 00:04:10.251 CC test/accel/dif/dif.o 00:04:10.251 LINK reset 00:04:10.251 CXX test/cpp_headers/env.o 00:04:10.251 CC test/nvme/sgl/sgl.o 00:04:10.511 CC test/blobfs/mkfs/mkfs.o 00:04:10.511 LINK spdk_top 00:04:10.511 CXX test/cpp_headers/event.o 00:04:10.511 CC app/vhost/vhost.o 00:04:10.511 CC examples/nvme/hello_world/hello_world.o 00:04:10.511 LINK sgl 00:04:10.511 CC test/lvol/esnap/esnap.o 00:04:10.511 LINK mkfs 00:04:10.771 CXX test/cpp_headers/fd_group.o 00:04:10.771 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:10.771 CC examples/accel/perf/accel_perf.o 00:04:10.771 LINK vhost 00:04:10.771 CXX test/cpp_headers/fd.o 00:04:10.771 CC examples/blob/hello_world/hello_blob.o 00:04:10.771 CXX test/cpp_headers/file.o 00:04:10.771 LINK hello_world 00:04:10.771 CC test/nvme/e2edp/nvme_dp.o 00:04:10.771 LINK dif 00:04:11.031 LINK hello_fsdev 00:04:11.031 CXX test/cpp_headers/fsdev.o 00:04:11.031 CXX test/cpp_headers/fsdev_module.o 00:04:11.031 CC app/spdk_dd/spdk_dd.o 00:04:11.031 LINK hello_blob 00:04:11.031 CXX test/cpp_headers/ftl.o 00:04:11.031 CC examples/nvme/reconnect/reconnect.o 00:04:11.031 CXX test/cpp_headers/fuse_dispatcher.o 00:04:11.290 LINK nvme_dp 00:04:11.290 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:11.290 LINK accel_perf 00:04:11.290 CXX test/cpp_headers/gpt_spec.o 00:04:11.290 CC examples/nvme/arbitration/arbitration.o 00:04:11.290 CC examples/nvme/hotplug/hotplug.o 00:04:11.550 CC examples/blob/cli/blobcli.o 00:04:11.551 CXX test/cpp_headers/hexlify.o 00:04:11.551 CC test/nvme/overhead/overhead.o 00:04:11.551 LINK spdk_dd 00:04:11.551 LINK reconnect 00:04:11.551 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:11.551 CXX test/cpp_headers/histogram_data.o 00:04:11.551 LINK arbitration 00:04:11.551 LINK hotplug 00:04:11.809 CXX test/cpp_headers/idxd.o 00:04:11.809 LINK cmb_copy 00:04:11.809 LINK overhead 00:04:11.809 LINK nvme_manage 00:04:11.809 CC app/fio/nvme/fio_plugin.o 00:04:11.809 CXX test/cpp_headers/idxd_spec.o 00:04:11.809 CC examples/nvme/abort/abort.o 00:04:11.809 CXX test/cpp_headers/init.o 00:04:11.809 CXX test/cpp_headers/ioat.o 00:04:12.067 LINK blobcli 00:04:12.067 CC test/nvme/err_injection/err_injection.o 00:04:12.067 CC examples/bdev/hello_world/hello_bdev.o 00:04:12.067 CXX test/cpp_headers/ioat_spec.o 00:04:12.067 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:12.067 CC app/fio/bdev/fio_plugin.o 00:04:12.067 CC examples/bdev/bdevperf/bdevperf.o 00:04:12.326 CXX test/cpp_headers/iscsi_spec.o 00:04:12.326 LINK err_injection 00:04:12.326 LINK pmr_persistence 00:04:12.326 LINK abort 00:04:12.326 LINK hello_bdev 00:04:12.326 CXX test/cpp_headers/json.o 00:04:12.586 LINK spdk_nvme 00:04:12.586 CC test/bdev/bdevio/bdevio.o 00:04:12.586 CXX test/cpp_headers/jsonrpc.o 00:04:12.586 CC test/nvme/startup/startup.o 00:04:12.586 CXX test/cpp_headers/keyring.o 00:04:12.586 CXX test/cpp_headers/keyring_module.o 00:04:12.586 CXX test/cpp_headers/likely.o 00:04:12.586 CXX test/cpp_headers/log.o 00:04:12.586 CXX test/cpp_headers/lvol.o 00:04:12.846 CXX 
test/cpp_headers/memory.o 00:04:12.846 CXX test/cpp_headers/mmio.o 00:04:12.846 LINK startup 00:04:12.846 CXX test/cpp_headers/nbd.o 00:04:12.846 LINK spdk_bdev 00:04:12.846 CC test/nvme/reserve/reserve.o 00:04:12.846 CXX test/cpp_headers/net.o 00:04:12.846 CXX test/cpp_headers/notify.o 00:04:12.846 LINK bdevio 00:04:12.846 CXX test/cpp_headers/nvme.o 00:04:12.846 CXX test/cpp_headers/nvme_intel.o 00:04:12.846 CXX test/cpp_headers/nvme_ocssd.o 00:04:12.846 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:12.846 CC test/nvme/simple_copy/simple_copy.o 00:04:13.105 LINK reserve 00:04:13.105 CC test/nvme/connect_stress/connect_stress.o 00:04:13.105 CXX test/cpp_headers/nvme_spec.o 00:04:13.105 CC test/nvme/boot_partition/boot_partition.o 00:04:13.105 CC test/nvme/compliance/nvme_compliance.o 00:04:13.105 CC test/nvme/fused_ordering/fused_ordering.o 00:04:13.105 LINK bdevperf 00:04:13.105 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:13.365 LINK simple_copy 00:04:13.365 LINK connect_stress 00:04:13.365 CC test/nvme/fdp/fdp.o 00:04:13.365 CXX test/cpp_headers/nvme_zns.o 00:04:13.365 LINK boot_partition 00:04:13.365 LINK fused_ordering 00:04:13.365 CXX test/cpp_headers/nvmf_cmd.o 00:04:13.365 LINK doorbell_aers 00:04:13.623 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:13.623 CXX test/cpp_headers/nvmf.o 00:04:13.624 CC test/nvme/cuse/cuse.o 00:04:13.624 LINK nvme_compliance 00:04:13.624 CXX test/cpp_headers/nvmf_spec.o 00:04:13.624 CXX test/cpp_headers/nvmf_transport.o 00:04:13.624 LINK fdp 00:04:13.624 CXX test/cpp_headers/opal.o 00:04:13.624 CC examples/nvmf/nvmf/nvmf.o 00:04:13.882 CXX test/cpp_headers/opal_spec.o 00:04:13.882 CXX test/cpp_headers/pci_ids.o 00:04:13.882 CXX test/cpp_headers/pipe.o 00:04:13.882 CXX test/cpp_headers/queue.o 00:04:13.882 CXX test/cpp_headers/reduce.o 00:04:13.882 CXX test/cpp_headers/rpc.o 00:04:13.882 CXX test/cpp_headers/scheduler.o 00:04:13.882 CXX test/cpp_headers/scsi.o 00:04:13.882 CXX test/cpp_headers/scsi_spec.o 00:04:13.882 CXX test/cpp_headers/sock.o 00:04:13.882 CXX test/cpp_headers/stdinc.o 00:04:13.882 CXX test/cpp_headers/string.o 00:04:14.140 LINK nvmf 00:04:14.140 CXX test/cpp_headers/thread.o 00:04:14.140 CXX test/cpp_headers/trace.o 00:04:14.140 CXX test/cpp_headers/trace_parser.o 00:04:14.140 CXX test/cpp_headers/tree.o 00:04:14.140 CXX test/cpp_headers/ublk.o 00:04:14.140 CXX test/cpp_headers/util.o 00:04:14.140 CXX test/cpp_headers/uuid.o 00:04:14.140 CXX test/cpp_headers/version.o 00:04:14.140 CXX test/cpp_headers/vfio_user_pci.o 00:04:14.140 CXX test/cpp_headers/vfio_user_spec.o 00:04:14.140 CXX test/cpp_headers/vhost.o 00:04:14.140 CXX test/cpp_headers/vmd.o 00:04:14.140 CXX test/cpp_headers/xor.o 00:04:14.140 CXX test/cpp_headers/zipf.o 00:04:15.074 LINK cuse 00:04:16.975 LINK esnap 00:04:17.541 00:04:17.541 real 0m55.812s 00:04:17.541 user 4m57.269s 00:04:17.541 sys 1m6.253s 00:04:17.541 05:59:19 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:17.541 05:59:19 make -- common/autotest_common.sh@10 -- $ set +x 00:04:17.541 ************************************ 00:04:17.541 END TEST make 00:04:17.541 ************************************ 00:04:17.541 05:59:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:17.541 05:59:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:17.541 05:59:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:17.541 05:59:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.541 05:59:19 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:17.541 05:59:19 -- pm/common@44 -- $ pid=6192 00:04:17.541 05:59:19 -- pm/common@50 -- $ kill -TERM 6192 00:04:17.541 05:59:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.541 05:59:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:17.541 05:59:19 -- pm/common@44 -- $ pid=6194 00:04:17.541 05:59:19 -- pm/common@50 -- $ kill -TERM 6194 00:04:17.541 05:59:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:17.541 05:59:19 -- nvmf/common.sh@7 -- # uname -s 00:04:17.800 05:59:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:17.800 05:59:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:17.800 05:59:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:17.800 05:59:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:17.800 05:59:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:17.800 05:59:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:17.800 05:59:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:17.800 05:59:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:17.800 05:59:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:17.800 05:59:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:17.800 05:59:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:23d47d08-ae40-4abe-a772-375a792e023f 00:04:17.800 05:59:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=23d47d08-ae40-4abe-a772-375a792e023f 00:04:17.800 05:59:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:17.800 05:59:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:17.800 05:59:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:17.800 05:59:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:17.800 05:59:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:17.800 05:59:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:17.800 05:59:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:17.800 05:59:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:17.800 05:59:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.800 05:59:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.800 05:59:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.800 05:59:19 -- paths/export.sh@5 -- # export PATH 00:04:17.800 05:59:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.800 05:59:19 -- nvmf/common.sh@47 -- # : 0 00:04:17.800 05:59:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:17.800 05:59:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:17.800 05:59:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:17.800 05:59:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:17.800 05:59:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:17.800 05:59:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:17.800 05:59:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:17.800 05:59:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:17.800 05:59:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:17.800 05:59:19 -- spdk/autotest.sh@32 -- # uname -s 00:04:17.800 05:59:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:17.800 05:59:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:17.800 05:59:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:17.800 05:59:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:17.800 05:59:19 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:17.800 05:59:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:17.800 05:59:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:17.800 05:59:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:17.800 05:59:19 -- spdk/autotest.sh@48 -- # udevadm_pid=65581 00:04:17.800 05:59:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:17.800 05:59:19 -- pm/common@17 -- # local monitor 00:04:17.800 05:59:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.800 05:59:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.800 05:59:19 -- pm/common@25 -- # sleep 1 00:04:17.800 05:59:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:17.800 05:59:19 -- pm/common@21 -- # date +%s 00:04:17.800 05:59:19 -- pm/common@21 -- # date +%s 00:04:17.800 05:59:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723528759 00:04:17.800 05:59:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723528759 00:04:17.800 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723528759_collect-vmstat.pm.log 00:04:17.800 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723528759_collect-cpu-load.pm.log 00:04:18.736 05:59:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:18.736 05:59:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:18.736 05:59:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:18.736 05:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:18.736 05:59:20 -- spdk/autotest.sh@59 -- # create_test_list 00:04:18.736 05:59:20 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:18.736 05:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:18.736 05:59:20 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:18.996 05:59:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:18.996 05:59:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:18.996 05:59:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:18.996 05:59:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:18.996 05:59:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:18.996 05:59:20 -- common/autotest_common.sh@1451 -- # uname 00:04:18.996 05:59:20 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:18.996 05:59:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:18.996 05:59:20 -- common/autotest_common.sh@1471 -- # uname 00:04:18.996 05:59:20 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:18.996 05:59:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:18.996 05:59:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:18.996 05:59:20 -- spdk/autotest.sh@72 -- # hash lcov 00:04:18.996 05:59:20 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:18.996 05:59:20 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:18.996 --rc lcov_branch_coverage=1 00:04:18.996 --rc lcov_function_coverage=1 00:04:18.996 --rc genhtml_branch_coverage=1 00:04:18.996 --rc genhtml_function_coverage=1 00:04:18.996 --rc genhtml_legend=1 00:04:18.996 --rc geninfo_all_blocks=1 00:04:18.996 ' 00:04:18.996 05:59:20 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:18.996 --rc lcov_branch_coverage=1 00:04:18.996 --rc lcov_function_coverage=1 00:04:18.996 --rc genhtml_branch_coverage=1 00:04:18.996 --rc genhtml_function_coverage=1 00:04:18.996 --rc genhtml_legend=1 00:04:18.996 --rc geninfo_all_blocks=1 00:04:18.996 ' 00:04:18.996 05:59:20 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:18.996 --rc lcov_branch_coverage=1 00:04:18.996 --rc lcov_function_coverage=1 00:04:18.996 --rc genhtml_branch_coverage=1 00:04:18.996 --rc genhtml_function_coverage=1 00:04:18.996 --rc genhtml_legend=1 00:04:18.996 --rc geninfo_all_blocks=1 00:04:18.996 --no-external' 00:04:18.996 05:59:20 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:18.996 --rc lcov_branch_coverage=1 00:04:18.996 --rc lcov_function_coverage=1 00:04:18.996 --rc genhtml_branch_coverage=1 00:04:18.996 --rc genhtml_function_coverage=1 00:04:18.996 --rc genhtml_legend=1 00:04:18.996 --rc geninfo_all_blocks=1 00:04:18.996 --no-external' 00:04:18.996 05:59:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:18.996 lcov: LCOV version 1.15 00:04:18.996 05:59:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:33.904 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:33.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:46.119 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev_module.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fsdev_module.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fuse_dispatcher.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fuse_dispatcher.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:46.119 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:46.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:46.119 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 
00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:46.120 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:46.120 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:49.412 05:59:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:49.412 05:59:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:49.412 05:59:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.412 05:59:50 -- spdk/autotest.sh@91 -- # rm -f 00:04:49.412 05:59:50 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.982 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:49.982 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:49.982 05:59:51 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:49.982 05:59:51 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:49.982 05:59:51 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:49.982 05:59:51 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:49.982 05:59:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:49.982 05:59:51 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:49.982 05:59:51 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:49.982 05:59:51 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.982 05:59:51 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:49.982 05:59:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:49.982 05:59:51 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:49.982 05:59:51 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:49.982 05:59:51 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.982 05:59:51 -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:49.982 05:59:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:49.982 05:59:51 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:49.982 05:59:51 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:49.982 05:59:51 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:49.982 05:59:51 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:49.982 05:59:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:49.982 05:59:51 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:49.982 05:59:51 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:49.982 05:59:51 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:49.982 05:59:51 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:49.982 05:59:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:49.982 05:59:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.982 05:59:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:49.982 05:59:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:49.982 05:59:51 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:49.982 05:59:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:50.241 No valid GPT data, bailing 00:04:50.241 05:59:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.241 05:59:51 -- scripts/common.sh@391 -- # pt= 00:04:50.241 05:59:51 -- scripts/common.sh@392 -- # return 1 00:04:50.241 05:59:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:50.241 1+0 records in 00:04:50.241 1+0 records out 00:04:50.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443642 s, 236 MB/s 00:04:50.241 05:59:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.241 05:59:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:50.241 05:59:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:50.241 05:59:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:50.242 05:59:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:50.242 No valid GPT data, bailing 00:04:50.242 05:59:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:50.242 05:59:51 -- scripts/common.sh@391 -- # pt= 00:04:50.242 05:59:51 -- scripts/common.sh@392 -- # return 1 00:04:50.242 05:59:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:50.242 1+0 records in 00:04:50.242 1+0 records out 00:04:50.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00624826 s, 168 MB/s 00:04:50.242 05:59:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.242 05:59:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:50.242 05:59:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:50.242 05:59:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:50.242 05:59:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:50.242 No valid GPT data, bailing 00:04:50.242 05:59:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:50.242 05:59:51 -- scripts/common.sh@391 -- # pt= 00:04:50.242 05:59:51 -- scripts/common.sh@392 -- # return 1 00:04:50.242 05:59:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:50.242 1+0 
records in 00:04:50.242 1+0 records out 00:04:50.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413656 s, 253 MB/s 00:04:50.242 05:59:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.242 05:59:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:50.242 05:59:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:50.242 05:59:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:50.242 05:59:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:50.501 No valid GPT data, bailing 00:04:50.501 05:59:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:50.501 05:59:52 -- scripts/common.sh@391 -- # pt= 00:04:50.501 05:59:52 -- scripts/common.sh@392 -- # return 1 00:04:50.501 05:59:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:50.501 1+0 records in 00:04:50.501 1+0 records out 00:04:50.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463733 s, 226 MB/s 00:04:50.501 05:59:52 -- spdk/autotest.sh@118 -- # sync 00:04:50.501 05:59:52 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:50.501 05:59:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:50.501 05:59:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:53.793 05:59:55 -- spdk/autotest.sh@124 -- # uname -s 00:04:53.793 05:59:55 -- spdk/autotest.sh@124 -- # [[ Linux == Linux ]] 00:04:53.793 05:59:55 -- spdk/autotest.sh@124 -- # [[ 0 -eq 1 ]] 00:04:53.793 05:59:55 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:54.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.053 Hugepages 00:04:54.053 node hugesize free / total 00:04:54.053 node0 1048576kB 0 / 0 00:04:54.053 node0 2048kB 0 / 0 00:04:54.053 00:04:54.053 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.313 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:54.313 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:54.573 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:54.573 05:59:56 -- spdk/autotest.sh@130 -- # uname -s 00:04:54.573 05:59:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:54.573 05:59:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:54.573 05:59:56 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.513 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.513 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.513 05:59:57 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:56.453 05:59:58 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:56.453 05:59:58 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:56.453 05:59:58 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:56.453 05:59:58 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:56.453 05:59:58 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:56.453 05:59:58 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:56.453 05:59:58 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.453 05:59:58 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:56.453 05:59:58 -- 
common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:56.712 05:59:58 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:04:56.712 05:59:58 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:56.712 05:59:58 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.238 Waiting for block devices as requested 00:04:57.238 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:57.238 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:57.238 05:59:58 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:57.238 05:59:58 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:57.238 05:59:58 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:04:57.238 05:59:58 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:57.238 05:59:59 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.238 05:59:59 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:57.238 05:59:59 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.238 05:59:59 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:04:57.238 05:59:59 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:04:57.238 05:59:59 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:04:57.238 05:59:59 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:04:57.238 05:59:59 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:57.238 05:59:59 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:57.509 05:59:59 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:04:57.509 05:59:59 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:57.509 05:59:59 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:57.509 05:59:59 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:57.509 05:59:59 -- common/autotest_common.sh@1553 -- # continue 00:04:57.509 05:59:59 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:57.509 05:59:59 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:57.509 05:59:59 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:57.509 05:59:59 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:04:57.509 05:59:59 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.509 05:59:59 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:57.509 05:59:59 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.509 05:59:59 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:57.509 05:59:59 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:57.509 05:59:59 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 
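The long run of grep/cut calls traced above is autotest_common.sh deciding, per controller, whether a namespace revert is needed: it reads the OACS field from nvme id-ctrl, masks out the Namespace Management bit, and then checks that unvmcap (unallocated NVM capacity) is zero before hitting continue. Collapsed into a standalone sketch, using the same variable names as the trace (the explicit bit-mask is an assumption about how the script derives oacs_ns_manage=8 from 0x12a), the check looks roughly like this:

  #!/usr/bin/env bash
  # Illustrative sketch of the per-controller check from nvme_namespace_revert.
  # Requires nvme-cli; nvme_ctrlr comes from the sysfs walk shown earlier in the trace.
  nvme_ctrlr=/dev/nvme1

  oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)        # " 0x12a" in this run
  oacs_ns_manage=$(( oacs & 0x8 ))                                    # bit 3 = Namespace Management

  if (( oacs_ns_manage != 0 )); then
      unvmcap=$(nvme id-ctrl "$nvme_ctrlr" | grep unvmcap | cut -d: -f2)
      if (( unvmcap == 0 )); then
          # No unallocated capacity, so there is nothing to revert on this
          # controller; the trace reaches 'continue' at this point.
          :
      fi
  fi

The same sequence repeats immediately below for /dev/nvme0, and because unvmcap is 0 on both controllers the revert is skipped for the whole run.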
00:04:57.509 05:59:59 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:57.509 05:59:59 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:57.509 05:59:59 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:57.509 05:59:59 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:04:57.509 05:59:59 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:57.509 05:59:59 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:57.509 05:59:59 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:57.509 05:59:59 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:57.509 05:59:59 -- common/autotest_common.sh@1553 -- # continue 00:04:57.509 05:59:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:57.509 05:59:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.509 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:57.509 05:59:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:57.509 05:59:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:57.509 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:57.509 05:59:59 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.448 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.448 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.448 06:00:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:58.448 06:00:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.448 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.448 06:00:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:58.448 06:00:00 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:58.448 06:00:00 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.448 06:00:00 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:58.448 06:00:00 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:58.448 06:00:00 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:58.448 06:00:00 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:58.448 06:00:00 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:58.448 06:00:00 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.448 06:00:00 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.448 06:00:00 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:58.707 06:00:00 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:04:58.707 06:00:00 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:58.707 06:00:00 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:58.707 06:00:00 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:58.707 06:00:00 -- common/autotest_common.sh@1576 -- # device=0x0010 00:04:58.707 06:00:00 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.707 06:00:00 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:58.707 06:00:00 -- common/autotest_common.sh@1576 -- # cat 
/sys/bus/pci/devices/0000:00:11.0/device 00:04:58.707 06:00:00 -- common/autotest_common.sh@1576 -- # device=0x0010 00:04:58.707 06:00:00 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.707 06:00:00 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:04:58.707 06:00:00 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:04:58.707 06:00:00 -- common/autotest_common.sh@1589 -- # return 0 00:04:58.707 06:00:00 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:58.707 06:00:00 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:58.707 06:00:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:58.707 06:00:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:58.707 06:00:00 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:58.707 06:00:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:58.707 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.707 06:00:00 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:58.707 06:00:00 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.707 06:00:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.707 06:00:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.707 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.707 ************************************ 00:04:58.707 START TEST env 00:04:58.707 ************************************ 00:04:58.707 06:00:00 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.707 * Looking for test storage... 00:04:58.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:58.707 06:00:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.707 06:00:00 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.707 06:00:00 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.707 06:00:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.707 ************************************ 00:04:58.707 START TEST env_memory 00:04:58.707 ************************************ 00:04:58.707 06:00:00 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.967 00:04:58.967 00:04:58.967 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.967 http://cunit.sourceforge.net/ 00:04:58.967 00:04:58.967 00:04:58.967 Suite: memory 00:04:58.967 Test: alloc and free memory map ...[2024-08-13 06:00:00.537654] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.967 passed 00:04:58.967 Test: mem map translation ...[2024-08-13 06:00:00.580073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.967 [2024-08-13 06:00:00.580162] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.967 [2024-08-13 06:00:00.580247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.967 [2024-08-13 06:00:00.580307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.967 passed 00:04:58.967 Test: mem map registration ...[2024-08-13 
06:00:00.648829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:58.967 [2024-08-13 06:00:00.648911] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:58.967 passed 00:04:58.967 Test: mem map adjacent registrations ...passed 00:04:58.967 00:04:58.967 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.967 suites 1 1 n/a 0 0 00:04:58.967 tests 4 4 4 0 0 00:04:58.967 asserts 152 152 152 0 n/a 00:04:58.967 00:04:58.967 Elapsed time = 0.238 seconds 00:04:59.227 00:04:59.227 real 0m0.291s 00:04:59.227 user 0m0.257s 00:04:59.227 sys 0m0.023s 00:04:59.227 06:00:00 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.227 06:00:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.227 ************************************ 00:04:59.227 END TEST env_memory 00:04:59.227 ************************************ 00:04:59.227 06:00:00 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.227 06:00:00 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.227 06:00:00 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.227 06:00:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.227 ************************************ 00:04:59.227 START TEST env_vtophys 00:04:59.227 ************************************ 00:04:59.227 06:00:00 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.227 EAL: lib.eal log level changed from notice to debug 00:04:59.227 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 1 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 2 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 3 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 4 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 5 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 6 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 7 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 8 as core 0 on socket 0 00:04:59.227 EAL: Detected lcore 9 as core 0 on socket 0 00:04:59.227 EAL: Maximum logical cores by configuration: 128 00:04:59.227 EAL: Detected CPU lcores: 10 00:04:59.227 EAL: Detected NUMA nodes: 1 00:04:59.227 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:59.227 EAL: Detected shared linkage of DPDK 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:59.227 EAL: Registered [vdev] bus. 
00:04:59.227 EAL: bus.vdev log level changed from disabled to notice 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:59.227 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:59.227 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:59.227 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:59.227 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.227 EAL: No shared files mode enabled, IPC is disabled 00:04:59.227 EAL: Selected IOVA mode 'PA' 00:04:59.228 EAL: Probing VFIO support... 00:04:59.228 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.228 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:59.228 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.228 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.228 EAL: Setting up physically contiguous memory... 00:04:59.228 EAL: Setting maximum number of open files to 524288 00:04:59.228 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.228 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.228 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.228 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.228 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.228 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.228 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.228 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.228 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.228 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.228 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.228 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.228 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.228 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.228 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.228 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.228 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.228 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.228 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.228 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.228 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.228 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.228 EAL: Hugepages will be freed exactly as allocated. 
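The repeated "Module /sys/module/vfio not found! error 2" messages are expected on this VM: no VFIO kernel modules are loaded, so EAL skips VFIO probing, selects IOVA mode 'PA', and keeps using the uio_pci_generic binding that setup.sh reported earlier. Outside the log, the same condition can be spotted with a trivial check like the following (illustrative only, not part of the autotest scripts):

  # Mirrors the sysfs paths EAL complains about above.
  if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
      echo "vfio loaded - EAL will probe VFIO support"
  else
      echo "vfio not loaded - EAL skips VFIO and falls back to IOVA mode 'PA'"
  fi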
00:04:59.228 EAL: No shared files mode enabled, IPC is disabled 00:04:59.228 EAL: No shared files mode enabled, IPC is disabled 00:04:59.228 EAL: TSC frequency is ~2290000 KHz 00:04:59.228 EAL: Main lcore 0 is ready (tid=7f1a2d685a40;cpuset=[0]) 00:04:59.228 EAL: Trying to obtain current memory policy. 00:04:59.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.228 EAL: Restoring previous memory policy: 0 00:04:59.228 EAL: request: mp_malloc_sync 00:04:59.228 EAL: No shared files mode enabled, IPC is disabled 00:04:59.228 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.228 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.228 EAL: No shared files mode enabled, IPC is disabled 00:04:59.228 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:59.228 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.228 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:59.228 00:04:59.228 00:04:59.228 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.228 http://cunit.sourceforge.net/ 00:04:59.228 00:04:59.228 00:04:59.228 Suite: components_suite 00:04:59.796 Test: vtophys_malloc_test ...passed 00:04:59.796 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.796 EAL: Trying to obtain current memory policy. 00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.796 EAL: Trying to obtain current memory policy. 00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.796 EAL: Trying to obtain current memory policy. 
00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.796 EAL: Trying to obtain current memory policy. 00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.796 EAL: Trying to obtain current memory policy. 00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.796 EAL: Trying to obtain current memory policy. 00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.796 EAL: Trying to obtain current memory policy. 00:04:59.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.796 EAL: Restoring previous memory policy: 4 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.796 EAL: request: mp_malloc_sync 00:04:59.796 EAL: No shared files mode enabled, IPC is disabled 00:04:59.796 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.055 EAL: request: mp_malloc_sync 00:05:00.055 EAL: No shared files mode enabled, IPC is disabled 00:05:00.055 EAL: Heap on socket 0 was shrunk by 258MB 00:05:00.055 EAL: Trying to obtain current memory policy. 
00:05:00.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.055 EAL: Restoring previous memory policy: 4 00:05:00.055 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.055 EAL: request: mp_malloc_sync 00:05:00.055 EAL: No shared files mode enabled, IPC is disabled 00:05:00.055 EAL: Heap on socket 0 was expanded by 514MB 00:05:00.055 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.315 EAL: request: mp_malloc_sync 00:05:00.315 EAL: No shared files mode enabled, IPC is disabled 00:05:00.315 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.315 EAL: Trying to obtain current memory policy. 00:05:00.315 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.315 EAL: Restoring previous memory policy: 4 00:05:00.315 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.315 EAL: request: mp_malloc_sync 00:05:00.315 EAL: No shared files mode enabled, IPC is disabled 00:05:00.315 EAL: Heap on socket 0 was expanded by 1026MB 00:05:00.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.835 passed 00:05:00.835 00:05:00.835 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.835 suites 1 1 n/a 0 0 00:05:00.835 tests 2 2 2 0 0 00:05:00.835 asserts 5218 5218 5218 0 n/a 00:05:00.835 00:05:00.835 Elapsed time = 1.338 seconds 00:05:00.835 EAL: request: mp_malloc_sync 00:05:00.835 EAL: No shared files mode enabled, IPC is disabled 00:05:00.835 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.835 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.835 EAL: request: mp_malloc_sync 00:05:00.835 EAL: No shared files mode enabled, IPC is disabled 00:05:00.835 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.835 EAL: No shared files mode enabled, IPC is disabled 00:05:00.835 EAL: No shared files mode enabled, IPC is disabled 00:05:00.835 EAL: No shared files mode enabled, IPC is disabled 00:05:00.835 00:05:00.835 real 0m1.582s 00:05:00.835 user 0m0.752s 00:05:00.835 sys 0m0.698s 00:05:00.835 06:00:02 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.835 06:00:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 END TEST env_vtophys 00:05:00.835 ************************************ 00:05:00.835 06:00:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.835 06:00:02 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.835 06:00:02 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.835 06:00:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 START TEST env_pci 00:05:00.835 ************************************ 00:05:00.835 06:00:02 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.835 00:05:00.835 00:05:00.835 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.835 http://cunit.sourceforge.net/ 00:05:00.835 00:05:00.835 00:05:00.835 Suite: pci 00:05:00.835 Test: pci_hook ...[2024-08-13 06:00:02.501281] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67920 has claimed it 00:05:00.835 EAL: Cannot find device (10000:00:01.0) 00:05:00.835 EAL: Failed to attach device on primary process 00:05:00.835 passed 00:05:00.835 00:05:00.835 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.835 suites 1 1 n/a 0 0 00:05:00.835 tests 1 1 1 0 0 
00:05:00.835 asserts 25 25 25 0 n/a 00:05:00.835 00:05:00.835 Elapsed time = 0.007 seconds 00:05:00.835 00:05:00.835 real 0m0.090s 00:05:00.835 user 0m0.043s 00:05:00.835 sys 0m0.046s 00:05:00.835 06:00:02 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.836 ************************************ 00:05:00.836 END TEST env_pci 00:05:00.836 ************************************ 00:05:00.836 06:00:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.836 06:00:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.836 06:00:02 env -- env/env.sh@15 -- # uname 00:05:00.836 06:00:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.836 06:00:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.836 06:00:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.836 06:00:02 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:00.836 06:00:02 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.836 06:00:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.095 ************************************ 00:05:01.095 START TEST env_dpdk_post_init 00:05:01.095 ************************************ 00:05:01.095 06:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:01.095 EAL: Detected CPU lcores: 10 00:05:01.095 EAL: Detected NUMA nodes: 1 00:05:01.095 EAL: Detected shared linkage of DPDK 00:05:01.095 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.095 EAL: Selected IOVA mode 'PA' 00:05:01.095 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.095 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:01.095 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:01.095 Starting DPDK initialization... 00:05:01.095 Starting SPDK post initialization... 00:05:01.095 SPDK NVMe probe 00:05:01.095 Attaching to 0000:00:10.0 00:05:01.095 Attaching to 0000:00:11.0 00:05:01.095 Attached to 0000:00:10.0 00:05:01.095 Attached to 0000:00:11.0 00:05:01.095 Cleaning up... 
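The env_dpdk_post_init run above is driven with -c 0x1 and --base-virtaddr=0x200000000000, which reach DPDK through SPDK's env layer before the NVMe probe attaches 0000:00:10.0 and 0000:00:11.0. A minimal sketch of that initialization path, with the option values copied from the log and error handling trimmed (the name string is an assumption):

    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";
        opts.core_mask = "0x1";                  /* matches -c 0x1 */
        opts.base_virtaddr = 0x200000000000ULL;  /* matches --base-virtaddr */

        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "Unable to initialize SPDK env\n");
            return 1;
        }
        /* spdk_nvme_probe() would run here to attach the controllers
         * reported as 0000:00:10.0 and 0000:00:11.0 in the log. */
        return 0;
    }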
00:05:01.095 ************************************ 00:05:01.095 END TEST env_dpdk_post_init 00:05:01.095 ************************************ 00:05:01.095 00:05:01.095 real 0m0.229s 00:05:01.095 user 0m0.064s 00:05:01.095 sys 0m0.067s 00:05:01.095 06:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.095 06:00:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.355 06:00:02 env -- env/env.sh@26 -- # uname 00:05:01.355 06:00:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.355 06:00:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.355 06:00:02 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.355 06:00:02 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.355 06:00:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.355 ************************************ 00:05:01.355 START TEST env_mem_callbacks 00:05:01.355 ************************************ 00:05:01.355 06:00:02 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.355 EAL: Detected CPU lcores: 10 00:05:01.355 EAL: Detected NUMA nodes: 1 00:05:01.355 EAL: Detected shared linkage of DPDK 00:05:01.355 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.355 EAL: Selected IOVA mode 'PA' 00:05:01.355 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.355 00:05:01.355 00:05:01.355 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.355 http://cunit.sourceforge.net/ 00:05:01.355 00:05:01.355 00:05:01.355 Suite: memory 00:05:01.355 Test: test ... 00:05:01.355 register 0x200000200000 2097152 00:05:01.355 malloc 3145728 00:05:01.355 register 0x200000400000 4194304 00:05:01.355 buf 0x200000500000 len 3145728 PASSED 00:05:01.355 malloc 64 00:05:01.355 buf 0x2000004fff40 len 64 PASSED 00:05:01.355 malloc 4194304 00:05:01.355 register 0x200000800000 6291456 00:05:01.355 buf 0x200000a00000 len 4194304 PASSED 00:05:01.355 free 0x200000500000 3145728 00:05:01.355 free 0x2000004fff40 64 00:05:01.355 unregister 0x200000400000 4194304 PASSED 00:05:01.355 free 0x200000a00000 4194304 00:05:01.355 unregister 0x200000800000 6291456 PASSED 00:05:01.355 malloc 8388608 00:05:01.355 register 0x200000400000 10485760 00:05:01.355 buf 0x200000600000 len 8388608 PASSED 00:05:01.355 free 0x200000600000 8388608 00:05:01.355 unregister 0x200000400000 10485760 PASSED 00:05:01.355 passed 00:05:01.355 00:05:01.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.355 suites 1 1 n/a 0 0 00:05:01.355 tests 1 1 1 0 0 00:05:01.355 asserts 15 15 15 0 n/a 00:05:01.355 00:05:01.355 Elapsed time = 0.011 seconds 00:05:01.355 00:05:01.355 real 0m0.181s 00:05:01.355 user 0m0.030s 00:05:01.355 sys 0m0.048s 00:05:01.355 06:00:03 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.355 06:00:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.355 ************************************ 00:05:01.355 END TEST env_mem_callbacks 00:05:01.355 ************************************ 00:05:01.614 00:05:01.614 real 0m2.832s 00:05:01.614 user 0m1.301s 00:05:01.614 sys 0m1.195s 00:05:01.614 06:00:03 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.614 ************************************ 00:05:01.614 END TEST env 00:05:01.614 ************************************ 00:05:01.614 06:00:03 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.614 06:00:03 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.614 06:00:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.614 06:00:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.614 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:01.614 ************************************ 00:05:01.614 START TEST rpc 00:05:01.614 ************************************ 00:05:01.614 06:00:03 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.614 * Looking for test storage... 00:05:01.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.614 06:00:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68039 00:05:01.614 06:00:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.614 06:00:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.614 06:00:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68039 00:05:01.614 06:00:03 rpc -- common/autotest_common.sh@827 -- # '[' -z 68039 ']' 00:05:01.614 06:00:03 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.614 06:00:03 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:01.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.614 06:00:03 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.614 06:00:03 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:01.614 06:00:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.874 [2024-08-13 06:00:03.442621] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:01.874 [2024-08-13 06:00:03.442749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68039 ] 00:05:01.874 [2024-08-13 06:00:03.589553] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.874 [2024-08-13 06:00:03.640781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.874 [2024-08-13 06:00:03.640843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68039' to capture a snapshot of events at runtime. 00:05:01.874 [2024-08-13 06:00:03.640863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.874 [2024-08-13 06:00:03.640884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.874 [2024-08-13 06:00:03.640893] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68039 for offline analysis/debug. 
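run_test rpc starts spdk_tgt and waits for it to listen on /var/tmp/spdk.sock; every rpc_cmd invocation that follows is a JSON-RPC 2.0 request over that Unix domain socket. A bare-bones client sketch showing the wire format, using only POSIX sockets rather than SPDK's own client library; the single read() and fixed buffer are simplifications for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int
    main(void)
    {
        /* Same request shape rpc_cmd issues for "bdev_get_bdevs". */
        const char *req = "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char resp[65536];
        ssize_t n;
        int fd;

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        if (write(fd, req, strlen(req)) < 0) {
            perror("write");
            return 1;
        }
        n = read(fd, resp, sizeof(resp) - 1);  /* one read suffices for small replies */
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);
        }
        close(fd);
        return 0;
    }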
00:05:01.874 [2024-08-13 06:00:03.640941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.810 06:00:04 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:02.810 06:00:04 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:02.810 06:00:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.810 06:00:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.810 06:00:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.810 06:00:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.810 06:00:04 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.810 06:00:04 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.810 06:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 ************************************ 00:05:02.810 START TEST rpc_integrity 00:05:02.810 ************************************ 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.810 { 00:05:02.810 "name": "Malloc0", 00:05:02.810 "aliases": [ 00:05:02.810 "a93e619f-fdce-48cb-986e-4dbc4d0716bc" 00:05:02.810 ], 00:05:02.810 "product_name": "Malloc disk", 00:05:02.810 "block_size": 512, 00:05:02.810 "num_blocks": 16384, 00:05:02.810 "uuid": "a93e619f-fdce-48cb-986e-4dbc4d0716bc", 00:05:02.810 "assigned_rate_limits": { 00:05:02.810 "rw_ios_per_sec": 0, 00:05:02.810 "rw_mbytes_per_sec": 0, 00:05:02.810 "r_mbytes_per_sec": 0, 00:05:02.810 "w_mbytes_per_sec": 0 00:05:02.810 }, 00:05:02.810 "claimed": false, 00:05:02.810 "zoned": false, 00:05:02.810 "supported_io_types": { 00:05:02.810 "read": true, 00:05:02.810 "write": true, 00:05:02.810 "unmap": true, 00:05:02.810 "flush": true, 
00:05:02.810 "reset": true, 00:05:02.810 "nvme_admin": false, 00:05:02.810 "nvme_io": false, 00:05:02.810 "nvme_io_md": false, 00:05:02.810 "write_zeroes": true, 00:05:02.810 "zcopy": true, 00:05:02.810 "get_zone_info": false, 00:05:02.810 "zone_management": false, 00:05:02.810 "zone_append": false, 00:05:02.810 "compare": false, 00:05:02.810 "compare_and_write": false, 00:05:02.810 "abort": true, 00:05:02.810 "seek_hole": false, 00:05:02.810 "seek_data": false, 00:05:02.810 "copy": true, 00:05:02.810 "nvme_iov_md": false 00:05:02.810 }, 00:05:02.810 "memory_domains": [ 00:05:02.810 { 00:05:02.810 "dma_device_id": "system", 00:05:02.810 "dma_device_type": 1 00:05:02.810 }, 00:05:02.810 { 00:05:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.810 "dma_device_type": 2 00:05:02.810 } 00:05:02.810 ], 00:05:02.810 "driver_specific": {} 00:05:02.810 } 00:05:02.810 ]' 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 [2024-08-13 06:00:04.417370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:02.810 [2024-08-13 06:00:04.417526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.810 [2024-08-13 06:00:04.417578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:02.810 [2024-08-13 06:00:04.417602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.810 [2024-08-13 06:00:04.419967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.810 [2024-08-13 06:00:04.420016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.810 Passthru0 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.810 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.810 { 00:05:02.810 "name": "Malloc0", 00:05:02.810 "aliases": [ 00:05:02.810 "a93e619f-fdce-48cb-986e-4dbc4d0716bc" 00:05:02.810 ], 00:05:02.810 "product_name": "Malloc disk", 00:05:02.810 "block_size": 512, 00:05:02.810 "num_blocks": 16384, 00:05:02.810 "uuid": "a93e619f-fdce-48cb-986e-4dbc4d0716bc", 00:05:02.810 "assigned_rate_limits": { 00:05:02.810 "rw_ios_per_sec": 0, 00:05:02.810 "rw_mbytes_per_sec": 0, 00:05:02.810 "r_mbytes_per_sec": 0, 00:05:02.810 "w_mbytes_per_sec": 0 00:05:02.810 }, 00:05:02.810 "claimed": true, 00:05:02.810 "claim_type": "exclusive_write", 00:05:02.810 "zoned": false, 00:05:02.810 "supported_io_types": { 00:05:02.810 "read": true, 00:05:02.810 "write": true, 00:05:02.810 "unmap": true, 00:05:02.810 "flush": true, 00:05:02.810 "reset": true, 00:05:02.810 "nvme_admin": false, 00:05:02.810 "nvme_io": false, 00:05:02.810 "nvme_io_md": false, 00:05:02.810 "write_zeroes": true, 00:05:02.810 "zcopy": true, 
00:05:02.810 "get_zone_info": false, 00:05:02.810 "zone_management": false, 00:05:02.810 "zone_append": false, 00:05:02.810 "compare": false, 00:05:02.810 "compare_and_write": false, 00:05:02.810 "abort": true, 00:05:02.810 "seek_hole": false, 00:05:02.810 "seek_data": false, 00:05:02.810 "copy": true, 00:05:02.810 "nvme_iov_md": false 00:05:02.810 }, 00:05:02.810 "memory_domains": [ 00:05:02.810 { 00:05:02.810 "dma_device_id": "system", 00:05:02.810 "dma_device_type": 1 00:05:02.810 }, 00:05:02.810 { 00:05:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.810 "dma_device_type": 2 00:05:02.810 } 00:05:02.810 ], 00:05:02.810 "driver_specific": {} 00:05:02.810 }, 00:05:02.810 { 00:05:02.810 "name": "Passthru0", 00:05:02.810 "aliases": [ 00:05:02.810 "819e37d9-4f0d-5701-9abd-3cad78a96a74" 00:05:02.810 ], 00:05:02.810 "product_name": "passthru", 00:05:02.810 "block_size": 512, 00:05:02.810 "num_blocks": 16384, 00:05:02.810 "uuid": "819e37d9-4f0d-5701-9abd-3cad78a96a74", 00:05:02.810 "assigned_rate_limits": { 00:05:02.810 "rw_ios_per_sec": 0, 00:05:02.810 "rw_mbytes_per_sec": 0, 00:05:02.810 "r_mbytes_per_sec": 0, 00:05:02.810 "w_mbytes_per_sec": 0 00:05:02.810 }, 00:05:02.810 "claimed": false, 00:05:02.810 "zoned": false, 00:05:02.810 "supported_io_types": { 00:05:02.810 "read": true, 00:05:02.810 "write": true, 00:05:02.810 "unmap": true, 00:05:02.810 "flush": true, 00:05:02.810 "reset": true, 00:05:02.810 "nvme_admin": false, 00:05:02.810 "nvme_io": false, 00:05:02.810 "nvme_io_md": false, 00:05:02.810 "write_zeroes": true, 00:05:02.810 "zcopy": true, 00:05:02.810 "get_zone_info": false, 00:05:02.810 "zone_management": false, 00:05:02.810 "zone_append": false, 00:05:02.810 "compare": false, 00:05:02.810 "compare_and_write": false, 00:05:02.810 "abort": true, 00:05:02.810 "seek_hole": false, 00:05:02.810 "seek_data": false, 00:05:02.810 "copy": true, 00:05:02.810 "nvme_iov_md": false 00:05:02.810 }, 00:05:02.810 "memory_domains": [ 00:05:02.810 { 00:05:02.810 "dma_device_id": "system", 00:05:02.810 "dma_device_type": 1 00:05:02.810 }, 00:05:02.810 { 00:05:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.810 "dma_device_type": 2 00:05:02.810 } 00:05:02.810 ], 00:05:02.810 "driver_specific": { 00:05:02.810 "passthru": { 00:05:02.810 "name": "Passthru0", 00:05:02.810 "base_bdev_name": "Malloc0" 00:05:02.810 } 00:05:02.810 } 00:05:02.810 } 00:05:02.811 ]' 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.811 ************************************ 00:05:02.811 END TEST rpc_integrity 00:05:02.811 ************************************ 00:05:02.811 06:00:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.811 00:05:02.811 real 0m0.314s 00:05:02.811 user 0m0.191s 00:05:02.811 sys 0m0.057s 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.811 06:00:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.069 06:00:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.070 06:00:04 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.070 06:00:04 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.070 06:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.070 ************************************ 00:05:03.070 START TEST rpc_plugins 00:05:03.070 ************************************ 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.070 { 00:05:03.070 "name": "Malloc1", 00:05:03.070 "aliases": [ 00:05:03.070 "305f3726-64a1-4284-91a5-09e1bc71ad8a" 00:05:03.070 ], 00:05:03.070 "product_name": "Malloc disk", 00:05:03.070 "block_size": 4096, 00:05:03.070 "num_blocks": 256, 00:05:03.070 "uuid": "305f3726-64a1-4284-91a5-09e1bc71ad8a", 00:05:03.070 "assigned_rate_limits": { 00:05:03.070 "rw_ios_per_sec": 0, 00:05:03.070 "rw_mbytes_per_sec": 0, 00:05:03.070 "r_mbytes_per_sec": 0, 00:05:03.070 "w_mbytes_per_sec": 0 00:05:03.070 }, 00:05:03.070 "claimed": false, 00:05:03.070 "zoned": false, 00:05:03.070 "supported_io_types": { 00:05:03.070 "read": true, 00:05:03.070 "write": true, 00:05:03.070 "unmap": true, 00:05:03.070 "flush": true, 00:05:03.070 "reset": true, 00:05:03.070 "nvme_admin": false, 00:05:03.070 "nvme_io": false, 00:05:03.070 "nvme_io_md": false, 00:05:03.070 "write_zeroes": true, 00:05:03.070 "zcopy": true, 00:05:03.070 "get_zone_info": false, 00:05:03.070 "zone_management": false, 00:05:03.070 "zone_append": false, 00:05:03.070 "compare": false, 00:05:03.070 "compare_and_write": false, 00:05:03.070 "abort": true, 00:05:03.070 "seek_hole": false, 00:05:03.070 "seek_data": false, 00:05:03.070 "copy": true, 00:05:03.070 "nvme_iov_md": false 00:05:03.070 }, 00:05:03.070 "memory_domains": [ 00:05:03.070 { 00:05:03.070 "dma_device_id": "system", 00:05:03.070 "dma_device_type": 1 00:05:03.070 }, 00:05:03.070 { 00:05:03.070 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:03.070 "dma_device_type": 2 00:05:03.070 } 00:05:03.070 ], 00:05:03.070 "driver_specific": {} 00:05:03.070 } 00:05:03.070 ]' 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.070 ************************************ 00:05:03.070 END TEST rpc_plugins 00:05:03.070 ************************************ 00:05:03.070 06:00:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.070 00:05:03.070 real 0m0.169s 00:05:03.070 user 0m0.100s 00:05:03.070 sys 0m0.027s 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.070 06:00:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.329 06:00:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.329 06:00:04 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.329 06:00:04 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.329 06:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.329 ************************************ 00:05:03.329 START TEST rpc_trace_cmd_test 00:05:03.329 ************************************ 00:05:03.329 06:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:03.329 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.329 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.329 06:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.329 06:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.329 06:00:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.329 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.329 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68039", 00:05:03.329 "tpoint_group_mask": "0x8", 00:05:03.329 "iscsi_conn": { 00:05:03.329 "mask": "0x2", 00:05:03.329 "tpoint_mask": "0x0" 00:05:03.329 }, 00:05:03.329 "scsi": { 00:05:03.329 "mask": "0x4", 00:05:03.329 "tpoint_mask": "0x0" 00:05:03.329 }, 00:05:03.329 "bdev": { 00:05:03.329 "mask": "0x8", 00:05:03.329 "tpoint_mask": "0xffffffffffffffff" 00:05:03.329 }, 00:05:03.329 "nvmf_rdma": { 00:05:03.329 "mask": "0x10", 00:05:03.329 "tpoint_mask": "0x0" 00:05:03.329 }, 00:05:03.329 "nvmf_tcp": { 00:05:03.329 "mask": "0x20", 00:05:03.329 "tpoint_mask": "0x0" 00:05:03.329 }, 00:05:03.329 "ftl": { 00:05:03.329 "mask": "0x40", 00:05:03.329 "tpoint_mask": "0x0" 00:05:03.329 }, 00:05:03.329 "blobfs": { 00:05:03.329 "mask": "0x80", 00:05:03.330 
"tpoint_mask": "0x0" 00:05:03.330 }, 00:05:03.330 "dsa": { 00:05:03.330 "mask": "0x200", 00:05:03.330 "tpoint_mask": "0x0" 00:05:03.330 }, 00:05:03.330 "thread": { 00:05:03.330 "mask": "0x400", 00:05:03.330 "tpoint_mask": "0x0" 00:05:03.330 }, 00:05:03.330 "nvme_pcie": { 00:05:03.330 "mask": "0x800", 00:05:03.330 "tpoint_mask": "0x0" 00:05:03.330 }, 00:05:03.330 "iaa": { 00:05:03.330 "mask": "0x1000", 00:05:03.330 "tpoint_mask": "0x0" 00:05:03.330 }, 00:05:03.330 "nvme_tcp": { 00:05:03.330 "mask": "0x2000", 00:05:03.330 "tpoint_mask": "0x0" 00:05:03.330 }, 00:05:03.330 "bdev_nvme": { 00:05:03.330 "mask": "0x4000", 00:05:03.330 "tpoint_mask": "0x0" 00:05:03.330 }, 00:05:03.330 "sock": { 00:05:03.330 "mask": "0x8000", 00:05:03.330 "tpoint_mask": "0x0" 00:05:03.330 } 00:05:03.330 }' 00:05:03.330 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.330 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:03.330 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.330 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.330 06:00:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.330 06:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.330 06:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.330 06:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.330 06:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.591 ************************************ 00:05:03.591 END TEST rpc_trace_cmd_test 00:05:03.591 ************************************ 00:05:03.591 06:00:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.591 00:05:03.591 real 0m0.244s 00:05:03.591 user 0m0.191s 00:05:03.591 sys 0m0.044s 00:05:03.591 06:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.591 06:00:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.591 06:00:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.591 06:00:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.591 06:00:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.591 06:00:05 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.591 06:00:05 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.591 06:00:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.591 ************************************ 00:05:03.591 START TEST rpc_daemon_integrity 00:05:03.591 ************************************ 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.591 06:00:05 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.591 { 00:05:03.591 "name": "Malloc2", 00:05:03.591 "aliases": [ 00:05:03.591 "c44a8db4-d518-4641-8a9c-2520a793e604" 00:05:03.591 ], 00:05:03.591 "product_name": "Malloc disk", 00:05:03.591 "block_size": 512, 00:05:03.591 "num_blocks": 16384, 00:05:03.591 "uuid": "c44a8db4-d518-4641-8a9c-2520a793e604", 00:05:03.591 "assigned_rate_limits": { 00:05:03.591 "rw_ios_per_sec": 0, 00:05:03.591 "rw_mbytes_per_sec": 0, 00:05:03.591 "r_mbytes_per_sec": 0, 00:05:03.591 "w_mbytes_per_sec": 0 00:05:03.591 }, 00:05:03.591 "claimed": false, 00:05:03.591 "zoned": false, 00:05:03.591 "supported_io_types": { 00:05:03.591 "read": true, 00:05:03.591 "write": true, 00:05:03.591 "unmap": true, 00:05:03.591 "flush": true, 00:05:03.591 "reset": true, 00:05:03.591 "nvme_admin": false, 00:05:03.591 "nvme_io": false, 00:05:03.591 "nvme_io_md": false, 00:05:03.591 "write_zeroes": true, 00:05:03.591 "zcopy": true, 00:05:03.591 "get_zone_info": false, 00:05:03.591 "zone_management": false, 00:05:03.591 "zone_append": false, 00:05:03.591 "compare": false, 00:05:03.591 "compare_and_write": false, 00:05:03.591 "abort": true, 00:05:03.591 "seek_hole": false, 00:05:03.591 "seek_data": false, 00:05:03.591 "copy": true, 00:05:03.591 "nvme_iov_md": false 00:05:03.591 }, 00:05:03.591 "memory_domains": [ 00:05:03.591 { 00:05:03.591 "dma_device_id": "system", 00:05:03.591 "dma_device_type": 1 00:05:03.591 }, 00:05:03.591 { 00:05:03.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.591 "dma_device_type": 2 00:05:03.591 } 00:05:03.591 ], 00:05:03.591 "driver_specific": {} 00:05:03.591 } 00:05:03.591 ]' 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.591 [2024-08-13 06:00:05.340957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.591 [2024-08-13 06:00:05.341059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.591 [2024-08-13 06:00:05.341089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:03.591 [2024-08-13 06:00:05.341105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.591 [2024-08-13 06:00:05.343800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.591 [2024-08-13 06:00:05.343846] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.591 Passthru0 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.591 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.591 { 00:05:03.591 "name": "Malloc2", 00:05:03.591 "aliases": [ 00:05:03.591 "c44a8db4-d518-4641-8a9c-2520a793e604" 00:05:03.591 ], 00:05:03.591 "product_name": "Malloc disk", 00:05:03.591 "block_size": 512, 00:05:03.591 "num_blocks": 16384, 00:05:03.591 "uuid": "c44a8db4-d518-4641-8a9c-2520a793e604", 00:05:03.592 "assigned_rate_limits": { 00:05:03.592 "rw_ios_per_sec": 0, 00:05:03.592 "rw_mbytes_per_sec": 0, 00:05:03.592 "r_mbytes_per_sec": 0, 00:05:03.592 "w_mbytes_per_sec": 0 00:05:03.592 }, 00:05:03.592 "claimed": true, 00:05:03.592 "claim_type": "exclusive_write", 00:05:03.592 "zoned": false, 00:05:03.592 "supported_io_types": { 00:05:03.592 "read": true, 00:05:03.592 "write": true, 00:05:03.592 "unmap": true, 00:05:03.592 "flush": true, 00:05:03.592 "reset": true, 00:05:03.592 "nvme_admin": false, 00:05:03.592 "nvme_io": false, 00:05:03.592 "nvme_io_md": false, 00:05:03.592 "write_zeroes": true, 00:05:03.592 "zcopy": true, 00:05:03.592 "get_zone_info": false, 00:05:03.592 "zone_management": false, 00:05:03.592 "zone_append": false, 00:05:03.592 "compare": false, 00:05:03.592 "compare_and_write": false, 00:05:03.592 "abort": true, 00:05:03.592 "seek_hole": false, 00:05:03.592 "seek_data": false, 00:05:03.592 "copy": true, 00:05:03.592 "nvme_iov_md": false 00:05:03.592 }, 00:05:03.592 "memory_domains": [ 00:05:03.592 { 00:05:03.592 "dma_device_id": "system", 00:05:03.592 "dma_device_type": 1 00:05:03.592 }, 00:05:03.592 { 00:05:03.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.592 "dma_device_type": 2 00:05:03.592 } 00:05:03.592 ], 00:05:03.592 "driver_specific": {} 00:05:03.592 }, 00:05:03.592 { 00:05:03.592 "name": "Passthru0", 00:05:03.592 "aliases": [ 00:05:03.592 "575174a9-e3a6-5876-b982-3ee941bc5dbd" 00:05:03.592 ], 00:05:03.592 "product_name": "passthru", 00:05:03.592 "block_size": 512, 00:05:03.592 "num_blocks": 16384, 00:05:03.592 "uuid": "575174a9-e3a6-5876-b982-3ee941bc5dbd", 00:05:03.592 "assigned_rate_limits": { 00:05:03.592 "rw_ios_per_sec": 0, 00:05:03.592 "rw_mbytes_per_sec": 0, 00:05:03.592 "r_mbytes_per_sec": 0, 00:05:03.592 "w_mbytes_per_sec": 0 00:05:03.592 }, 00:05:03.592 "claimed": false, 00:05:03.592 "zoned": false, 00:05:03.592 "supported_io_types": { 00:05:03.592 "read": true, 00:05:03.592 "write": true, 00:05:03.592 "unmap": true, 00:05:03.592 "flush": true, 00:05:03.592 "reset": true, 00:05:03.592 "nvme_admin": false, 00:05:03.592 "nvme_io": false, 00:05:03.592 "nvme_io_md": false, 00:05:03.592 "write_zeroes": true, 00:05:03.592 "zcopy": true, 00:05:03.592 "get_zone_info": false, 00:05:03.592 "zone_management": false, 00:05:03.592 "zone_append": false, 00:05:03.592 "compare": false, 00:05:03.592 "compare_and_write": false, 00:05:03.592 "abort": true, 00:05:03.592 "seek_hole": false, 00:05:03.592 "seek_data": false, 00:05:03.592 "copy": true, 00:05:03.592 "nvme_iov_md": false 00:05:03.592 }, 00:05:03.592 
"memory_domains": [ 00:05:03.592 { 00:05:03.592 "dma_device_id": "system", 00:05:03.592 "dma_device_type": 1 00:05:03.592 }, 00:05:03.592 { 00:05:03.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.592 "dma_device_type": 2 00:05:03.592 } 00:05:03.592 ], 00:05:03.592 "driver_specific": { 00:05:03.592 "passthru": { 00:05:03.592 "name": "Passthru0", 00:05:03.592 "base_bdev_name": "Malloc2" 00:05:03.592 } 00:05:03.592 } 00:05:03.592 } 00:05:03.592 ]' 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.852 ************************************ 00:05:03.852 END TEST rpc_daemon_integrity 00:05:03.852 ************************************ 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.852 00:05:03.852 real 0m0.332s 00:05:03.852 user 0m0.206s 00:05:03.852 sys 0m0.055s 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.852 06:00:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.852 06:00:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.852 06:00:05 rpc -- rpc/rpc.sh@84 -- # killprocess 68039 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@946 -- # '[' -z 68039 ']' 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@950 -- # kill -0 68039 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@951 -- # uname 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68039 00:05:03.852 killing process with pid 68039 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68039' 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@965 -- # kill 68039 00:05:03.852 06:00:05 rpc -- common/autotest_common.sh@970 -- # wait 68039 00:05:04.421 00:05:04.421 real 0m2.778s 00:05:04.421 user 0m3.398s 
00:05:04.421 sys 0m0.828s 00:05:04.421 ************************************ 00:05:04.421 END TEST rpc 00:05:04.421 ************************************ 00:05:04.421 06:00:06 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.421 06:00:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.421 06:00:06 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.421 06:00:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.421 06:00:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.421 06:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.421 ************************************ 00:05:04.421 START TEST skip_rpc 00:05:04.421 ************************************ 00:05:04.421 06:00:06 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.421 * Looking for test storage... 00:05:04.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.421 06:00:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.421 06:00:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.421 06:00:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.421 06:00:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.421 06:00:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.421 06:00:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.421 ************************************ 00:05:04.421 START TEST skip_rpc 00:05:04.421 ************************************ 00:05:04.421 06:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:04.421 06:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=68233 00:05:04.421 06:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.421 06:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.421 06:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.680 [2024-08-13 06:00:06.293360] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:05:04.680 [2024-08-13 06:00:06.293986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68233 ] 00:05:04.680 [2024-08-13 06:00:06.440290] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.968 [2024-08-13 06:00:06.490154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@646 -- # local es=0 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # rpc_cmd spdk_get_version 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # es=1 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 68233 00:05:10.249 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 68233 ']' 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 68233 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68233 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68233' 00:05:10.250 killing process with pid 68233 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 68233 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 68233 00:05:10.250 00:05:10.250 real 0m5.444s 00:05:10.250 user 0m5.064s 00:05:10.250 sys 0m0.301s 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.250 ************************************ 00:05:10.250 END TEST skip_rpc 00:05:10.250 ************************************ 00:05:10.250 06:00:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:10.250 06:00:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.250 06:00:11 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.250 06:00:11 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.250 06:00:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.250 ************************************ 00:05:10.250 START TEST skip_rpc_with_json 00:05:10.250 ************************************ 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=68320 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 68320 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 68320 ']' 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:10.250 06:00:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.250 [2024-08-13 06:00:11.792279] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:05:10.250 [2024-08-13 06:00:11.792510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68320 ] 00:05:10.250 [2024-08-13 06:00:11.935957] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.250 [2024-08-13 06:00:11.985561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.189 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.189 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:11.189 06:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.189 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:11.189 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.189 [2024-08-13 06:00:12.623467] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.189 request: 00:05:11.189 { 00:05:11.190 "trtype": "tcp", 00:05:11.190 "method": "nvmf_get_transports", 00:05:11.190 "req_id": 1 00:05:11.190 } 00:05:11.190 Got JSON-RPC error response 00:05:11.190 response: 00:05:11.190 { 00:05:11.190 "code": -19, 00:05:11.190 "message": "No such device" 00:05:11.190 } 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.190 [2024-08-13 06:00:12.635554] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:11.190 06:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.190 { 00:05:11.190 "subsystems": [ 00:05:11.190 { 00:05:11.190 "subsystem": "fsdev", 00:05:11.190 "config": [ 00:05:11.190 { 00:05:11.190 "method": "fsdev_set_opts", 00:05:11.190 "params": { 00:05:11.190 "fsdev_io_pool_size": 65535, 00:05:11.190 "fsdev_io_cache_size": 256 00:05:11.190 } 00:05:11.190 } 00:05:11.190 ] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "keyring", 00:05:11.190 "config": [] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "iobuf", 00:05:11.190 "config": [ 00:05:11.190 { 00:05:11.190 "method": "iobuf_set_options", 00:05:11.190 "params": { 00:05:11.190 "small_pool_count": 8192, 00:05:11.190 "large_pool_count": 1024, 00:05:11.190 "small_bufsize": 8192, 00:05:11.190 "large_bufsize": 135168 00:05:11.190 } 00:05:11.190 } 00:05:11.190 ] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "sock", 00:05:11.190 "config": [ 00:05:11.190 { 00:05:11.190 "method": 
"sock_set_default_impl", 00:05:11.190 "params": { 00:05:11.190 "impl_name": "posix" 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "sock_impl_set_options", 00:05:11.190 "params": { 00:05:11.190 "impl_name": "ssl", 00:05:11.190 "recv_buf_size": 4096, 00:05:11.190 "send_buf_size": 4096, 00:05:11.190 "enable_recv_pipe": true, 00:05:11.190 "enable_quickack": false, 00:05:11.190 "enable_placement_id": 0, 00:05:11.190 "enable_zerocopy_send_server": true, 00:05:11.190 "enable_zerocopy_send_client": false, 00:05:11.190 "zerocopy_threshold": 0, 00:05:11.190 "tls_version": 0, 00:05:11.190 "enable_ktls": false 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "sock_impl_set_options", 00:05:11.190 "params": { 00:05:11.190 "impl_name": "posix", 00:05:11.190 "recv_buf_size": 2097152, 00:05:11.190 "send_buf_size": 2097152, 00:05:11.190 "enable_recv_pipe": true, 00:05:11.190 "enable_quickack": false, 00:05:11.190 "enable_placement_id": 0, 00:05:11.190 "enable_zerocopy_send_server": true, 00:05:11.190 "enable_zerocopy_send_client": false, 00:05:11.190 "zerocopy_threshold": 0, 00:05:11.190 "tls_version": 0, 00:05:11.190 "enable_ktls": false 00:05:11.190 } 00:05:11.190 } 00:05:11.190 ] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "vmd", 00:05:11.190 "config": [] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "accel", 00:05:11.190 "config": [ 00:05:11.190 { 00:05:11.190 "method": "accel_set_options", 00:05:11.190 "params": { 00:05:11.190 "small_cache_size": 128, 00:05:11.190 "large_cache_size": 16, 00:05:11.190 "task_count": 2048, 00:05:11.190 "sequence_count": 2048, 00:05:11.190 "buf_count": 2048 00:05:11.190 } 00:05:11.190 } 00:05:11.190 ] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "bdev", 00:05:11.190 "config": [ 00:05:11.190 { 00:05:11.190 "method": "bdev_set_options", 00:05:11.190 "params": { 00:05:11.190 "bdev_io_pool_size": 65535, 00:05:11.190 "bdev_io_cache_size": 256, 00:05:11.190 "bdev_auto_examine": true, 00:05:11.190 "iobuf_small_cache_size": 128, 00:05:11.190 "iobuf_large_cache_size": 16 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "bdev_raid_set_options", 00:05:11.190 "params": { 00:05:11.190 "process_window_size_kb": 1024, 00:05:11.190 "process_max_bandwidth_mb_sec": 0 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "bdev_iscsi_set_options", 00:05:11.190 "params": { 00:05:11.190 "timeout_sec": 30 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "bdev_nvme_set_options", 00:05:11.190 "params": { 00:05:11.190 "action_on_timeout": "none", 00:05:11.190 "timeout_us": 0, 00:05:11.190 "timeout_admin_us": 0, 00:05:11.190 "keep_alive_timeout_ms": 10000, 00:05:11.190 "arbitration_burst": 0, 00:05:11.190 "low_priority_weight": 0, 00:05:11.190 "medium_priority_weight": 0, 00:05:11.190 "high_priority_weight": 0, 00:05:11.190 "nvme_adminq_poll_period_us": 10000, 00:05:11.190 "nvme_ioq_poll_period_us": 0, 00:05:11.190 "io_queue_requests": 0, 00:05:11.190 "delay_cmd_submit": true, 00:05:11.190 "transport_retry_count": 4, 00:05:11.190 "bdev_retry_count": 3, 00:05:11.190 "transport_ack_timeout": 0, 00:05:11.190 "ctrlr_loss_timeout_sec": 0, 00:05:11.190 "reconnect_delay_sec": 0, 00:05:11.190 "fast_io_fail_timeout_sec": 0, 00:05:11.190 "disable_auto_failback": false, 00:05:11.190 "generate_uuids": false, 00:05:11.190 "transport_tos": 0, 00:05:11.190 "nvme_error_stat": false, 00:05:11.190 "rdma_srq_size": 0, 00:05:11.190 "io_path_stat": false, 00:05:11.190 
"allow_accel_sequence": false, 00:05:11.190 "rdma_max_cq_size": 0, 00:05:11.190 "rdma_cm_event_timeout_ms": 0, 00:05:11.190 "dhchap_digests": [ 00:05:11.190 "sha256", 00:05:11.190 "sha384", 00:05:11.190 "sha512" 00:05:11.190 ], 00:05:11.190 "dhchap_dhgroups": [ 00:05:11.190 "null", 00:05:11.190 "ffdhe2048", 00:05:11.190 "ffdhe3072", 00:05:11.190 "ffdhe4096", 00:05:11.190 "ffdhe6144", 00:05:11.190 "ffdhe8192" 00:05:11.190 ] 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "bdev_nvme_set_hotplug", 00:05:11.190 "params": { 00:05:11.190 "period_us": 100000, 00:05:11.190 "enable": false 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "bdev_wait_for_examine" 00:05:11.190 } 00:05:11.190 ] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "scsi", 00:05:11.190 "config": null 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "scheduler", 00:05:11.190 "config": [ 00:05:11.190 { 00:05:11.190 "method": "framework_set_scheduler", 00:05:11.190 "params": { 00:05:11.190 "name": "static" 00:05:11.190 } 00:05:11.190 } 00:05:11.190 ] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "vhost_scsi", 00:05:11.190 "config": [] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "vhost_blk", 00:05:11.190 "config": [] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "ublk", 00:05:11.190 "config": [] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "nbd", 00:05:11.190 "config": [] 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "subsystem": "nvmf", 00:05:11.190 "config": [ 00:05:11.190 { 00:05:11.190 "method": "nvmf_set_config", 00:05:11.190 "params": { 00:05:11.190 "discovery_filter": "match_any", 00:05:11.190 "admin_cmd_passthru": { 00:05:11.190 "identify_ctrlr": false 00:05:11.190 } 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "nvmf_set_max_subsystems", 00:05:11.190 "params": { 00:05:11.190 "max_subsystems": 1024 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "nvmf_set_crdt", 00:05:11.190 "params": { 00:05:11.190 "crdt1": 0, 00:05:11.190 "crdt2": 0, 00:05:11.190 "crdt3": 0 00:05:11.190 } 00:05:11.190 }, 00:05:11.190 { 00:05:11.190 "method": "nvmf_create_transport", 00:05:11.190 "params": { 00:05:11.190 "trtype": "TCP", 00:05:11.190 "max_queue_depth": 128, 00:05:11.190 "max_io_qpairs_per_ctrlr": 127, 00:05:11.190 "in_capsule_data_size": 4096, 00:05:11.190 "max_io_size": 131072, 00:05:11.190 "io_unit_size": 131072, 00:05:11.190 "max_aq_depth": 128, 00:05:11.190 "num_shared_buffers": 511, 00:05:11.190 "buf_cache_size": 4294967295, 00:05:11.190 "dif_insert_or_strip": false, 00:05:11.190 "zcopy": false, 00:05:11.190 "c2h_success": true, 00:05:11.190 "sock_priority": 0, 00:05:11.190 "abort_timeout_sec": 1, 00:05:11.190 "ack_timeout": 0, 00:05:11.190 "data_wr_pool_size": 0 00:05:11.190 } 00:05:11.190 } 00:05:11.190 ] 00:05:11.191 }, 00:05:11.191 { 00:05:11.191 "subsystem": "iscsi", 00:05:11.191 "config": [ 00:05:11.191 { 00:05:11.191 "method": "iscsi_set_options", 00:05:11.191 "params": { 00:05:11.191 "node_base": "iqn.2016-06.io.spdk", 00:05:11.191 "max_sessions": 128, 00:05:11.191 "max_connections_per_session": 2, 00:05:11.191 "max_queue_depth": 64, 00:05:11.191 "default_time2wait": 2, 00:05:11.191 "default_time2retain": 20, 00:05:11.191 "first_burst_length": 8192, 00:05:11.191 "immediate_data": true, 00:05:11.191 "allow_duplicated_isid": false, 00:05:11.191 "error_recovery_level": 0, 00:05:11.191 "nop_timeout": 60, 00:05:11.191 "nop_in_interval": 30, 00:05:11.191 "disable_chap": false, 
00:05:11.191 "require_chap": false, 00:05:11.191 "mutual_chap": false, 00:05:11.191 "chap_group": 0, 00:05:11.191 "max_large_datain_per_connection": 64, 00:05:11.191 "max_r2t_per_connection": 4, 00:05:11.191 "pdu_pool_size": 36864, 00:05:11.191 "immediate_data_pool_size": 16384, 00:05:11.191 "data_out_pool_size": 2048 00:05:11.191 } 00:05:11.191 } 00:05:11.191 ] 00:05:11.191 } 00:05:11.191 ] 00:05:11.191 } 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 68320 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 68320 ']' 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 68320 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68320 00:05:11.191 killing process with pid 68320 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68320' 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 68320 00:05:11.191 06:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 68320 00:05:11.450 06:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=68349 00:05:11.450 06:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.450 06:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 68349 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 68349 ']' 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 68349 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68349 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.729 killing process with pid 68349 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68349' 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 68349 00:05:16.729 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 68349 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.989 00:05:16.989 real 0m6.940s 00:05:16.989 user 0m6.506s 00:05:16.989 sys 0m0.731s 00:05:16.989 ************************************ 00:05:16.989 END TEST skip_rpc_with_json 00:05:16.989 ************************************ 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 06:00:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:16.989 06:00:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.989 06:00:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.989 06:00:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 ************************************ 00:05:16.989 START TEST skip_rpc_with_delay 00:05:16.989 ************************************ 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # local es=0 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:16.989 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.249 [2024-08-13 06:00:18.804268] app.c: 833:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
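The skip_rpc_with_json steps above exercise a save/restore round trip: create the TCP transport over RPC, dump the live configuration with save_config, restart spdk_tgt from that JSON file with the RPC server disabled, and grep the fresh log for the 'TCP Transport Init' notice to confirm the transport came back from the saved config. A condensed sketch of the same flow, with placeholder paths and a plain sleep standing in for the test's waitforlisten helper:

    ./build/bin/spdk_tgt -m 0x1 &
    sleep 2                                           # the real test polls the RPC socket instead
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json
    kill %1; wait
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt && echo 'transport restored from saved config'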
00:05:17.249 [2024-08-13 06:00:18.804514] app.c: 712:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:17.249 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # es=1 00:05:17.249 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:17.249 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:17.249 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:17.249 00:05:17.249 real 0m0.160s 00:05:17.249 user 0m0.084s 00:05:17.249 sys 0m0.074s 00:05:17.249 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.249 06:00:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:17.249 ************************************ 00:05:17.249 END TEST skip_rpc_with_delay 00:05:17.249 ************************************ 00:05:17.249 06:00:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.249 06:00:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.249 06:00:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.249 06:00:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.249 06:00:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.249 06:00:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.249 ************************************ 00:05:17.249 START TEST exit_on_failed_rpc_init 00:05:17.249 ************************************ 00:05:17.249 06:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:17.249 06:00:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=68455 00:05:17.249 06:00:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.249 06:00:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 68455 00:05:17.249 06:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 68455 ']' 00:05:17.249 06:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.249 06:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.250 06:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.250 06:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.250 06:00:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.250 [2024-08-13 06:00:19.024603] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
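The skip_rpc_with_delay check that wraps up here hinges on inverting an exit status: spdk_tgt must refuse '--wait-for-rpc' when no RPC server is going to be started, so the NOT helper from autotest_common.sh succeeds only if the wrapped command fails. A simplified stand-in for that pattern (the real helper also distinguishes signal exits above 128, which this sketch omits):

    expect_failure() {                  # hypothetical stand-in for the NOT() helper
        local es=0
        "$@" || es=$?                   # run the command, capture its exit status
        (( es != 0 ))                   # succeed only when the command failed
    }
    expect_failure ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc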
00:05:17.250 [2024-08-13 06:00:19.025182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68455 ] 00:05:17.510 [2024-08-13 06:00:19.173265] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.510 [2024-08-13 06:00:19.219222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # local es=0 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:18.080 06:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.339 [2024-08-13 06:00:19.955805] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:18.339 [2024-08-13 06:00:19.956024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68473 ] 00:05:18.339 [2024-08-13 06:00:20.085389] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.598 [2024-08-13 06:00:20.156387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.598 [2024-08-13 06:00:20.156631] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
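That 'socket path ... in use' error is exactly what exit_on_failed_rpc_init is waiting for: the first spdk_tgt already listens on the default RPC socket /var/tmp/spdk.sock, so the second instance cannot bind it and spdk_app_start aborts. Running two targets side by side instead means giving the second one its own socket with -r, roughly (core masks and path are illustrative):

    ./build/bin/spdk_tgt -m 0x1 &                              # owns /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &       # separate RPC socket, no conflict
    ./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods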
00:05:18.598 [2024-08-13 06:00:20.156659] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.598 [2024-08-13 06:00:20.156724] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # es=234 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # es=106 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # case "$es" in 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # es=1 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 68455 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 68455 ']' 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 68455 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68455 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68455' 00:05:18.598 killing process with pid 68455 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 68455 00:05:18.598 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 68455 00:05:19.168 00:05:19.168 real 0m1.779s 00:05:19.168 user 0m1.919s 00:05:19.168 sys 0m0.516s 00:05:19.168 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.168 06:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.168 ************************************ 00:05:19.168 END TEST exit_on_failed_rpc_init 00:05:19.168 ************************************ 00:05:19.168 06:00:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:19.168 00:05:19.168 real 0m14.704s 00:05:19.168 user 0m13.705s 00:05:19.168 sys 0m1.881s 00:05:19.168 06:00:20 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.168 06:00:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.168 ************************************ 00:05:19.168 END TEST skip_rpc 00:05:19.168 ************************************ 00:05:19.168 06:00:20 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.168 06:00:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.168 06:00:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.168 06:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.168 
************************************ 00:05:19.168 START TEST rpc_client 00:05:19.168 ************************************ 00:05:19.168 06:00:20 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.168 * Looking for test storage... 00:05:19.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:19.168 06:00:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:19.428 OK 00:05:19.428 06:00:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.428 00:05:19.428 real 0m0.190s 00:05:19.428 user 0m0.091s 00:05:19.428 sys 0m0.109s 00:05:19.428 06:00:21 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.428 ************************************ 00:05:19.428 END TEST rpc_client 00:05:19.428 ************************************ 00:05:19.428 06:00:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:19.428 06:00:21 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.428 06:00:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.428 06:00:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.428 06:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.428 ************************************ 00:05:19.428 START TEST json_config 00:05:19.428 ************************************ 00:05:19.428 06:00:21 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.428 06:00:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.428 06:00:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:23d47d08-ae40-4abe-a772-375a792e023f 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=23d47d08-ae40-4abe-a772-375a792e023f 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.429 06:00:21 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.429 06:00:21 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.429 06:00:21 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.429 06:00:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.429 06:00:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.429 06:00:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.429 06:00:21 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.429 06:00:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@47 -- # : 0 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.429 06:00:21 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.429 06:00:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.429 06:00:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.429 06:00:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.429 06:00:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.429 06:00:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.429 06:00:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:19.429 
WARNING: No tests are enabled so not running JSON configuration tests 00:05:19.429 06:00:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:19.429 00:05:19.429 real 0m0.125s 00:05:19.429 user 0m0.060s 00:05:19.429 sys 0m0.062s 00:05:19.429 06:00:21 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.429 ************************************ 00:05:19.429 END TEST json_config 00:05:19.429 ************************************ 00:05:19.429 06:00:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.689 06:00:21 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.689 06:00:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.689 06:00:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.689 06:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.689 ************************************ 00:05:19.689 START TEST json_config_extra_key 00:05:19.689 ************************************ 00:05:19.689 06:00:21 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:23d47d08-ae40-4abe-a772-375a792e023f 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=23d47d08-ae40-4abe-a772-375a792e023f 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.689 06:00:21 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.689 06:00:21 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.689 06:00:21 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.689 
06:00:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.689 06:00:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.689 06:00:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.689 06:00:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.689 06:00:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.689 06:00:21 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.689 06:00:21 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.689 INFO: launching applications... 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.689 06:00:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=68637 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.689 Waiting for target to run... 00:05:19.689 06:00:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 68637 /var/tmp/spdk_tgt.sock 00:05:19.689 06:00:21 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 68637 ']' 00:05:19.689 06:00:21 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.689 06:00:21 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:19.689 06:00:21 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.690 06:00:21 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:19.690 06:00:21 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.690 06:00:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.690 [2024-08-13 06:00:21.477700] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
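At this point json_config_extra_key has launched the target with a 1024 MiB memory limit, a dedicated RPC socket /var/tmp/spdk_tgt.sock and the extra_key.json config, and waitforlisten blocks until that socket answers before the test goes on. A rough readiness poll along those lines, using rpc_get_methods as the probe (a guess at the shape of the helper, not the actual autotest_common.sh code):

    wait_for_rpc() {                    # hypothetical poll, not the real waitforlisten
        local sock=$1 retries=${2:-100}
        for ((i = 0; i < retries; i++)); do
            ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk_tgt.sock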
00:05:19.690 [2024-08-13 06:00:21.477905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68637 ] 00:05:20.259 [2024-08-13 06:00:21.828950] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.259 [2024-08-13 06:00:21.861449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.827 00:05:20.827 INFO: shutting down applications... 00:05:20.827 06:00:22 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:20.827 06:00:22 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.827 06:00:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:20.827 06:00:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 68637 ]] 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 68637 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 68637 00:05:20.827 06:00:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.086 06:00:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.086 06:00:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.086 06:00:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 68637 00:05:21.086 06:00:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.086 06:00:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.086 06:00:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.086 06:00:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.086 SPDK target shutdown done 00:05:21.086 06:00:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.086 Success 00:05:21.086 00:05:21.086 real 0m1.544s 00:05:21.086 user 0m1.337s 00:05:21.086 sys 0m0.414s 00:05:21.086 06:00:22 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.086 06:00:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.086 ************************************ 00:05:21.086 END TEST json_config_extra_key 00:05:21.086 ************************************ 00:05:21.086 06:00:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.086 06:00:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.086 06:00:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.086 06:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.348 ************************************ 00:05:21.348 START TEST alias_rpc 00:05:21.348 
************************************ 00:05:21.348 06:00:22 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.348 * Looking for test storage... 00:05:21.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.348 06:00:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.348 06:00:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68697 00:05:21.348 06:00:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.348 06:00:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68697 00:05:21.348 06:00:22 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 68697 ']' 00:05:21.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.348 06:00:22 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.348 06:00:22 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.348 06:00:22 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.348 06:00:22 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.348 06:00:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.348 [2024-08-13 06:00:23.083099] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:21.348 [2024-08-13 06:00:23.083235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68697 ] 00:05:21.607 [2024-08-13 06:00:23.226495] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.607 [2024-08-13 06:00:23.274354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.180 06:00:23 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.180 06:00:23 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:22.180 06:00:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:22.449 06:00:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68697 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 68697 ']' 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 68697 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68697 00:05:22.449 killing process with pid 68697 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68697' 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@965 -- # kill 68697 00:05:22.449 06:00:24 alias_rpc -- common/autotest_common.sh@970 -- # wait 68697 00:05:23.017 00:05:23.017 real 0m1.622s 00:05:23.017 user 0m1.667s 00:05:23.017 sys 0m0.435s 00:05:23.017 ************************************ 00:05:23.017 END TEST alias_rpc 
00:05:23.017 ************************************ 00:05:23.017 06:00:24 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.017 06:00:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.017 06:00:24 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:23.017 06:00:24 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.017 06:00:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.017 06:00:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.017 06:00:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.017 ************************************ 00:05:23.017 START TEST spdkcli_tcp 00:05:23.017 ************************************ 00:05:23.017 06:00:24 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.017 * Looking for test storage... 00:05:23.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:23.017 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:23.017 06:00:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:23.017 06:00:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:23.017 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:23.018 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:23.018 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:23.018 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.018 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=68774 00:05:23.018 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:23.018 06:00:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 68774 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 68774 ']' 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.018 06:00:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.018 [2024-08-13 06:00:24.777862] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
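The spdkcli_tcp run that follows does not talk to this target over its Unix socket directly; as the next lines show, it bridges the socket to TCP with socat and then points rpc.py at 127.0.0.1:9998. Stripped of the test plumbing, the bridge is just:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # expose the RPC socket on TCP port 9998
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods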
00:05:23.018 [2024-08-13 06:00:24.778093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68774 ] 00:05:23.277 [2024-08-13 06:00:24.911109] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.277 [2024-08-13 06:00:24.960920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.277 [2024-08-13 06:00:24.961012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.845 06:00:25 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.845 06:00:25 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:23.845 06:00:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=68791 00:05:23.845 06:00:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.845 06:00:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.103 [ 00:05:24.103 "bdev_malloc_delete", 00:05:24.103 "bdev_malloc_create", 00:05:24.103 "bdev_null_resize", 00:05:24.103 "bdev_null_delete", 00:05:24.103 "bdev_null_create", 00:05:24.103 "bdev_nvme_cuse_unregister", 00:05:24.103 "bdev_nvme_cuse_register", 00:05:24.103 "bdev_opal_new_user", 00:05:24.103 "bdev_opal_set_lock_state", 00:05:24.103 "bdev_opal_delete", 00:05:24.103 "bdev_opal_get_info", 00:05:24.103 "bdev_opal_create", 00:05:24.103 "bdev_nvme_opal_revert", 00:05:24.103 "bdev_nvme_opal_init", 00:05:24.103 "bdev_nvme_send_cmd", 00:05:24.103 "bdev_nvme_get_path_iostat", 00:05:24.103 "bdev_nvme_get_mdns_discovery_info", 00:05:24.103 "bdev_nvme_stop_mdns_discovery", 00:05:24.103 "bdev_nvme_start_mdns_discovery", 00:05:24.103 "bdev_nvme_set_multipath_policy", 00:05:24.103 "bdev_nvme_set_preferred_path", 00:05:24.103 "bdev_nvme_get_io_paths", 00:05:24.103 "bdev_nvme_remove_error_injection", 00:05:24.103 "bdev_nvme_add_error_injection", 00:05:24.103 "bdev_nvme_get_discovery_info", 00:05:24.103 "bdev_nvme_stop_discovery", 00:05:24.103 "bdev_nvme_start_discovery", 00:05:24.103 "bdev_nvme_get_controller_health_info", 00:05:24.103 "bdev_nvme_disable_controller", 00:05:24.103 "bdev_nvme_enable_controller", 00:05:24.103 "bdev_nvme_reset_controller", 00:05:24.103 "bdev_nvme_get_transport_statistics", 00:05:24.103 "bdev_nvme_apply_firmware", 00:05:24.103 "bdev_nvme_detach_controller", 00:05:24.103 "bdev_nvme_get_controllers", 00:05:24.103 "bdev_nvme_attach_controller", 00:05:24.103 "bdev_nvme_set_hotplug", 00:05:24.103 "bdev_nvme_set_options", 00:05:24.103 "bdev_passthru_delete", 00:05:24.103 "bdev_passthru_create", 00:05:24.103 "bdev_lvol_set_parent_bdev", 00:05:24.103 "bdev_lvol_set_parent", 00:05:24.103 "bdev_lvol_check_shallow_copy", 00:05:24.103 "bdev_lvol_start_shallow_copy", 00:05:24.103 "bdev_lvol_grow_lvstore", 00:05:24.103 "bdev_lvol_get_lvols", 00:05:24.103 "bdev_lvol_get_lvstores", 00:05:24.103 "bdev_lvol_delete", 00:05:24.103 "bdev_lvol_set_read_only", 00:05:24.103 "bdev_lvol_resize", 00:05:24.103 "bdev_lvol_decouple_parent", 00:05:24.103 "bdev_lvol_inflate", 00:05:24.103 "bdev_lvol_rename", 00:05:24.103 "bdev_lvol_clone_bdev", 00:05:24.103 "bdev_lvol_clone", 00:05:24.103 "bdev_lvol_snapshot", 00:05:24.103 "bdev_lvol_create", 00:05:24.103 "bdev_lvol_delete_lvstore", 00:05:24.103 "bdev_lvol_rename_lvstore", 00:05:24.103 "bdev_lvol_create_lvstore", 
00:05:24.103 "bdev_raid_set_options", 00:05:24.103 "bdev_raid_remove_base_bdev", 00:05:24.103 "bdev_raid_add_base_bdev", 00:05:24.103 "bdev_raid_delete", 00:05:24.103 "bdev_raid_create", 00:05:24.103 "bdev_raid_get_bdevs", 00:05:24.103 "bdev_error_inject_error", 00:05:24.103 "bdev_error_delete", 00:05:24.103 "bdev_error_create", 00:05:24.103 "bdev_split_delete", 00:05:24.103 "bdev_split_create", 00:05:24.103 "bdev_delay_delete", 00:05:24.103 "bdev_delay_create", 00:05:24.103 "bdev_delay_update_latency", 00:05:24.103 "bdev_zone_block_delete", 00:05:24.103 "bdev_zone_block_create", 00:05:24.103 "blobfs_create", 00:05:24.103 "blobfs_detect", 00:05:24.103 "blobfs_set_cache_size", 00:05:24.103 "bdev_aio_delete", 00:05:24.103 "bdev_aio_rescan", 00:05:24.103 "bdev_aio_create", 00:05:24.103 "bdev_ftl_set_property", 00:05:24.103 "bdev_ftl_get_properties", 00:05:24.103 "bdev_ftl_get_stats", 00:05:24.103 "bdev_ftl_unmap", 00:05:24.103 "bdev_ftl_unload", 00:05:24.103 "bdev_ftl_delete", 00:05:24.103 "bdev_ftl_load", 00:05:24.103 "bdev_ftl_create", 00:05:24.103 "bdev_virtio_attach_controller", 00:05:24.103 "bdev_virtio_scsi_get_devices", 00:05:24.103 "bdev_virtio_detach_controller", 00:05:24.103 "bdev_virtio_blk_set_hotplug", 00:05:24.103 "bdev_iscsi_delete", 00:05:24.103 "bdev_iscsi_create", 00:05:24.103 "bdev_iscsi_set_options", 00:05:24.103 "accel_error_inject_error", 00:05:24.103 "ioat_scan_accel_module", 00:05:24.103 "dsa_scan_accel_module", 00:05:24.103 "iaa_scan_accel_module", 00:05:24.103 "keyring_file_remove_key", 00:05:24.103 "keyring_file_add_key", 00:05:24.103 "keyring_linux_set_options", 00:05:24.103 "fsdev_aio_delete", 00:05:24.103 "fsdev_aio_create", 00:05:24.103 "iscsi_get_histogram", 00:05:24.103 "iscsi_enable_histogram", 00:05:24.103 "iscsi_set_options", 00:05:24.103 "iscsi_get_auth_groups", 00:05:24.103 "iscsi_auth_group_remove_secret", 00:05:24.103 "iscsi_auth_group_add_secret", 00:05:24.103 "iscsi_delete_auth_group", 00:05:24.103 "iscsi_create_auth_group", 00:05:24.103 "iscsi_set_discovery_auth", 00:05:24.103 "iscsi_get_options", 00:05:24.103 "iscsi_target_node_request_logout", 00:05:24.103 "iscsi_target_node_set_redirect", 00:05:24.103 "iscsi_target_node_set_auth", 00:05:24.103 "iscsi_target_node_add_lun", 00:05:24.103 "iscsi_get_stats", 00:05:24.103 "iscsi_get_connections", 00:05:24.103 "iscsi_portal_group_set_auth", 00:05:24.103 "iscsi_start_portal_group", 00:05:24.103 "iscsi_delete_portal_group", 00:05:24.103 "iscsi_create_portal_group", 00:05:24.103 "iscsi_get_portal_groups", 00:05:24.104 "iscsi_delete_target_node", 00:05:24.104 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.104 "iscsi_target_node_add_pg_ig_maps", 00:05:24.104 "iscsi_create_target_node", 00:05:24.104 "iscsi_get_target_nodes", 00:05:24.104 "iscsi_delete_initiator_group", 00:05:24.104 "iscsi_initiator_group_remove_initiators", 00:05:24.104 "iscsi_initiator_group_add_initiators", 00:05:24.104 "iscsi_create_initiator_group", 00:05:24.104 "iscsi_get_initiator_groups", 00:05:24.104 "nvmf_set_crdt", 00:05:24.104 "nvmf_set_config", 00:05:24.104 "nvmf_set_max_subsystems", 00:05:24.104 "nvmf_stop_mdns_prr", 00:05:24.104 "nvmf_publish_mdns_prr", 00:05:24.104 "nvmf_subsystem_get_listeners", 00:05:24.104 "nvmf_subsystem_get_qpairs", 00:05:24.104 "nvmf_subsystem_get_controllers", 00:05:24.104 "nvmf_get_stats", 00:05:24.104 "nvmf_get_transports", 00:05:24.104 "nvmf_create_transport", 00:05:24.104 "nvmf_get_targets", 00:05:24.104 "nvmf_delete_target", 00:05:24.104 "nvmf_create_target", 00:05:24.104 
"nvmf_subsystem_allow_any_host", 00:05:24.104 "nvmf_subsystem_remove_host", 00:05:24.104 "nvmf_subsystem_add_host", 00:05:24.104 "nvmf_ns_remove_host", 00:05:24.104 "nvmf_ns_add_host", 00:05:24.104 "nvmf_subsystem_remove_ns", 00:05:24.104 "nvmf_subsystem_add_ns", 00:05:24.104 "nvmf_subsystem_listener_set_ana_state", 00:05:24.104 "nvmf_discovery_get_referrals", 00:05:24.104 "nvmf_discovery_remove_referral", 00:05:24.104 "nvmf_discovery_add_referral", 00:05:24.104 "nvmf_subsystem_remove_listener", 00:05:24.104 "nvmf_subsystem_add_listener", 00:05:24.104 "nvmf_delete_subsystem", 00:05:24.104 "nvmf_create_subsystem", 00:05:24.104 "nvmf_get_subsystems", 00:05:24.104 "env_dpdk_get_mem_stats", 00:05:24.104 "nbd_get_disks", 00:05:24.104 "nbd_stop_disk", 00:05:24.104 "nbd_start_disk", 00:05:24.104 "ublk_recover_disk", 00:05:24.104 "ublk_get_disks", 00:05:24.104 "ublk_stop_disk", 00:05:24.104 "ublk_start_disk", 00:05:24.104 "ublk_destroy_target", 00:05:24.104 "ublk_create_target", 00:05:24.104 "virtio_blk_create_transport", 00:05:24.104 "virtio_blk_get_transports", 00:05:24.104 "vhost_controller_set_coalescing", 00:05:24.104 "vhost_get_controllers", 00:05:24.104 "vhost_delete_controller", 00:05:24.104 "vhost_create_blk_controller", 00:05:24.104 "vhost_scsi_controller_remove_target", 00:05:24.104 "vhost_scsi_controller_add_target", 00:05:24.104 "vhost_start_scsi_controller", 00:05:24.104 "vhost_create_scsi_controller", 00:05:24.104 "thread_set_cpumask", 00:05:24.104 "framework_get_governor", 00:05:24.104 "framework_get_scheduler", 00:05:24.104 "framework_set_scheduler", 00:05:24.104 "framework_get_reactors", 00:05:24.104 "thread_get_io_channels", 00:05:24.104 "thread_get_pollers", 00:05:24.104 "thread_get_stats", 00:05:24.104 "framework_monitor_context_switch", 00:05:24.104 "spdk_kill_instance", 00:05:24.104 "log_enable_timestamps", 00:05:24.104 "log_get_flags", 00:05:24.104 "log_clear_flag", 00:05:24.104 "log_set_flag", 00:05:24.104 "log_get_level", 00:05:24.104 "log_set_level", 00:05:24.104 "log_get_print_level", 00:05:24.104 "log_set_print_level", 00:05:24.104 "framework_enable_cpumask_locks", 00:05:24.104 "framework_disable_cpumask_locks", 00:05:24.104 "framework_wait_init", 00:05:24.104 "framework_start_init", 00:05:24.104 "scsi_get_devices", 00:05:24.104 "bdev_get_histogram", 00:05:24.104 "bdev_enable_histogram", 00:05:24.104 "bdev_set_qos_limit", 00:05:24.104 "bdev_set_qd_sampling_period", 00:05:24.104 "bdev_get_bdevs", 00:05:24.104 "bdev_reset_iostat", 00:05:24.104 "bdev_get_iostat", 00:05:24.104 "bdev_examine", 00:05:24.104 "bdev_wait_for_examine", 00:05:24.104 "bdev_set_options", 00:05:24.104 "accel_get_stats", 00:05:24.104 "accel_set_options", 00:05:24.104 "accel_set_driver", 00:05:24.104 "accel_crypto_key_destroy", 00:05:24.104 "accel_crypto_keys_get", 00:05:24.104 "accel_crypto_key_create", 00:05:24.104 "accel_assign_opc", 00:05:24.104 "accel_get_module_info", 00:05:24.104 "accel_get_opc_assignments", 00:05:24.104 "vmd_rescan", 00:05:24.104 "vmd_remove_device", 00:05:24.104 "vmd_enable", 00:05:24.104 "sock_get_default_impl", 00:05:24.104 "sock_set_default_impl", 00:05:24.104 "sock_impl_set_options", 00:05:24.104 "sock_impl_get_options", 00:05:24.104 "iobuf_get_stats", 00:05:24.104 "iobuf_set_options", 00:05:24.104 "keyring_get_keys", 00:05:24.104 "framework_get_pci_devices", 00:05:24.104 "framework_get_config", 00:05:24.104 "framework_get_subsystems", 00:05:24.104 "fsdev_set_opts", 00:05:24.104 "fsdev_get_opts", 00:05:24.104 "trace_get_info", 00:05:24.104 "trace_get_tpoint_group_mask", 
00:05:24.104 "trace_disable_tpoint_group", 00:05:24.104 "trace_enable_tpoint_group", 00:05:24.104 "trace_clear_tpoint_mask", 00:05:24.104 "trace_set_tpoint_mask", 00:05:24.104 "notify_get_notifications", 00:05:24.104 "notify_get_types", 00:05:24.104 "spdk_get_version", 00:05:24.104 "rpc_get_methods" 00:05:24.104 ] 00:05:24.104 06:00:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.104 06:00:25 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.104 06:00:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.104 06:00:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.104 06:00:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 68774 00:05:24.104 06:00:25 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 68774 ']' 00:05:24.104 06:00:25 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 68774 00:05:24.104 06:00:25 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:24.363 06:00:25 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.363 06:00:25 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68774 00:05:24.363 06:00:25 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.363 06:00:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.363 06:00:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68774' 00:05:24.363 killing process with pid 68774 00:05:24.363 06:00:25 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 68774 00:05:24.363 06:00:25 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 68774 00:05:24.623 00:05:24.623 real 0m1.766s 00:05:24.623 user 0m3.087s 00:05:24.623 sys 0m0.523s 00:05:24.623 06:00:26 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.623 ************************************ 00:05:24.623 END TEST spdkcli_tcp 00:05:24.623 ************************************ 00:05:24.623 06:00:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.623 06:00:26 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.623 06:00:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.623 06:00:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.623 06:00:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.623 ************************************ 00:05:24.623 START TEST dpdk_mem_utility 00:05:24.623 ************************************ 00:05:24.623 06:00:26 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.883 * Looking for test storage... 
00:05:24.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:24.883 06:00:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.883 06:00:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68866 00:05:24.883 06:00:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.883 06:00:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68866 00:05:24.883 06:00:26 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 68866 ']' 00:05:24.883 06:00:26 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.883 06:00:26 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.883 06:00:26 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.883 06:00:26 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.883 06:00:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.883 [2024-08-13 06:00:26.613944] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:24.883 [2024-08-13 06:00:26.614201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68866 ] 00:05:25.141 [2024-08-13 06:00:26.761381] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.141 [2024-08-13 06:00:26.810612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.711 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.711 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:25.711 06:00:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.711 06:00:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.711 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:25.711 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.711 { 00:05:25.711 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.711 } 00:05:25.711 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:25.711 06:00:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.973 DPDK memory size 852.000000 MiB in 1 heap(s) 00:05:25.973 1 heaps totaling size 852.000000 MiB 00:05:25.973 size: 852.000000 MiB heap id: 0 00:05:25.973 end heaps---------- 00:05:25.973 9 mempools totaling size 634.625427 MiB 00:05:25.973 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.973 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.973 size: 84.521057 MiB name: bdev_io_68866 00:05:25.973 size: 51.011292 MiB name: evtpool_68866 00:05:25.973 size: 50.003479 MiB name: msgpool_68866 00:05:25.973 size: 36.509338 MiB name: fsdev_io_68866 00:05:25.973 size: 21.763794 MiB name: PDU_Pool 
00:05:25.973 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.973 size: 0.026123 MiB name: Session_Pool 00:05:25.973 end mempools------- 00:05:25.973 6 memzones totaling size 4.142822 MiB 00:05:25.973 size: 1.000366 MiB name: RG_ring_0_68866 00:05:25.973 size: 1.000366 MiB name: RG_ring_1_68866 00:05:25.973 size: 1.000366 MiB name: RG_ring_4_68866 00:05:25.973 size: 1.000366 MiB name: RG_ring_5_68866 00:05:25.973 size: 0.125366 MiB name: RG_ring_2_68866 00:05:25.973 size: 0.015991 MiB name: RG_ring_3_68866 00:05:25.973 end memzones------- 00:05:25.973 06:00:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.973 heap id: 0 total size: 852.000000 MiB number of busy elements: 298 number of free elements: 16 00:05:25.973 list of free elements. size: 13.962585 MiB 00:05:25.973 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:25.973 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:25.973 element at address: 0x20001b400000 with size: 0.999878 MiB 00:05:25.973 element at address: 0x20001b600000 with size: 0.999878 MiB 00:05:25.973 element at address: 0x200034200000 with size: 0.994446 MiB 00:05:25.973 element at address: 0x200015e00000 with size: 0.978699 MiB 00:05:25.973 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:25.973 element at address: 0x20001b800000 with size: 0.936584 MiB 00:05:25.973 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:25.973 element at address: 0x20001d000000 with size: 0.568970 MiB 00:05:25.973 element at address: 0x20000d800000 with size: 0.489624 MiB 00:05:25.973 element at address: 0x200003e00000 with size: 0.488464 MiB 00:05:25.973 element at address: 0x20001ba00000 with size: 0.485657 MiB 00:05:25.973 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:25.973 element at address: 0x20002a400000 with size: 0.395752 MiB 00:05:25.973 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:25.973 list of standard malloc elements. 
size: 199.265137 MiB 00:05:25.973 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:25.973 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:25.973 element at address: 0x20001b4fff80 with size: 1.000122 MiB 00:05:25.973 element at address: 0x20001b6fff80 with size: 1.000122 MiB 00:05:25.973 element at address: 0x20001b8fff80 with size: 1.000122 MiB 00:05:25.973 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.973 element at address: 0x20001b8eff00 with size: 0.062622 MiB 00:05:25.973 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.973 element at address: 0x20001b8efdc0 with size: 0.000305 MiB 00:05:25.973 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 
00:05:25.973 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.973 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:25.973 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:25.974 element at 
address: 0x200003e7e500 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:25.974 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x200015efa8c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001b8efc40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001b8efd00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001babc740 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091a80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091b40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091c00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091cc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091d80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091e40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091f00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d091fc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092080 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092140 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092200 
with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0922c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092380 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092440 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092500 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0925c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092680 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092740 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092800 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0928c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092980 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092a40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092b00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092bc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092c80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092d40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092e00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092ec0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d092f80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093040 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093100 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0931c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093280 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093340 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093400 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0934c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093580 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093640 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093700 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0937c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093880 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093940 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093a00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093ac0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093b80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093c40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093d00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093dc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093e80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d093f40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094000 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0940c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094180 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094240 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094300 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0943c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094480 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094540 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094600 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0946c0 with size: 0.000183 MiB 
00:05:25.974 element at address: 0x20001d094780 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094840 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094900 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0949c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094a80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094b40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094c00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094cc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094d80 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094e40 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094f00 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d094fc0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d095080 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d095140 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d095200 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d0952c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d095380 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20001d095440 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a465500 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a4655c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c1c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c3c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c480 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c540 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c600 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c6c0 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c780 with size: 0.000183 MiB 00:05:25.974 element at address: 0x20002a46c840 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46c900 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46c9c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ca80 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46cb40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46cc00 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ccc0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46cd80 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ce40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46cf00 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46cfc0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d080 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d140 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d200 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d2c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d380 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d440 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d500 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d5c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d680 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d740 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d800 with size: 0.000183 MiB 00:05:25.975 element at 
address: 0x20002a46d8c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46d980 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46da40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46db00 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46dbc0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46dc80 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46dd40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46de00 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46dec0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46df80 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e040 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e100 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e1c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e280 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e340 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e400 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e4c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e580 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e640 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e700 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e7c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e880 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46e940 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ea00 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46eac0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46eb80 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ec40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ed00 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46edc0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ee80 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ef40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f000 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f0c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f180 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f240 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f300 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f3c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f480 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f540 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f600 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f6c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f780 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f840 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f900 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46f9c0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46fa80 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46fb40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46fc00 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46fcc0 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46fd80 
with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46fe40 with size: 0.000183 MiB 00:05:25.975 element at address: 0x20002a46ff00 with size: 0.000183 MiB 00:05:25.975 list of memzone associated elements. size: 638.772278 MiB 00:05:25.975 element at address: 0x20001d095500 with size: 211.416748 MiB 00:05:25.975 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.975 element at address: 0x20002a46ffc0 with size: 157.562561 MiB 00:05:25.975 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.975 element at address: 0x200015ffab80 with size: 84.020630 MiB 00:05:25.975 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68866_0 00:05:25.975 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:25.975 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68866_0 00:05:25.975 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:25.975 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68866_0 00:05:25.975 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:25.975 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_68866_0 00:05:25.975 element at address: 0x20001bbbe940 with size: 20.255554 MiB 00:05:25.975 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.975 element at address: 0x2000343feb40 with size: 18.005066 MiB 00:05:25.975 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.975 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:25.975 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68866 00:05:25.975 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:25.975 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68866 00:05:25.975 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.975 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68866 00:05:25.975 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:25.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.975 element at address: 0x20001babc800 with size: 1.008118 MiB 00:05:25.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.975 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:25.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.975 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:25.975 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.975 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:25.975 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68866 00:05:25.975 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:25.975 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68866 00:05:25.975 element at address: 0x200015efa980 with size: 1.000488 MiB 00:05:25.975 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68866 00:05:25.975 element at address: 0x2000342fe940 with size: 1.000488 MiB 00:05:25.975 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68866 00:05:25.975 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:25.975 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_68866 00:05:25.975 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:25.975 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68866 00:05:25.975 element at address: 0x20000d87db80 with 
size: 0.500488 MiB 00:05:25.975 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.975 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:25.975 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.975 element at address: 0x20001ba7c540 with size: 0.250488 MiB 00:05:25.975 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.975 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:05:25.975 associated memzone info: size: 0.125366 MiB name: RG_ring_2_68866 00:05:25.975 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:25.975 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.975 element at address: 0x20002a465680 with size: 0.023743 MiB 00:05:25.975 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.975 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:05:25.975 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68866 00:05:25.975 element at address: 0x20002a46b7c0 with size: 0.002441 MiB 00:05:25.975 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.975 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:25.975 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68866 00:05:25.975 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:25.975 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_68866 00:05:25.975 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:05:25.975 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68866 00:05:25.975 element at address: 0x20002a46c280 with size: 0.000305 MiB 00:05:25.975 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.975 06:00:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.975 06:00:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68866 00:05:25.975 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 68866 ']' 00:05:25.975 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 68866 00:05:25.975 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:25.975 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:25.975 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68866 00:05:25.975 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:25.976 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:25.976 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68866' 00:05:25.976 killing process with pid 68866 00:05:25.976 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 68866 00:05:25.976 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 68866 00:05:26.235 00:05:26.235 real 0m1.580s 00:05:26.235 user 0m1.566s 00:05:26.235 sys 0m0.454s 00:05:26.235 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.235 06:00:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.235 ************************************ 00:05:26.235 END TEST dpdk_mem_utility 00:05:26.235 ************************************ 00:05:26.495 06:00:28 -- spdk/autotest.sh@181 -- # run_test event 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.495 06:00:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.495 06:00:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.495 06:00:28 -- common/autotest_common.sh@10 -- # set +x 00:05:26.495 ************************************ 00:05:26.495 START TEST event 00:05:26.495 ************************************ 00:05:26.495 06:00:28 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.495 * Looking for test storage... 00:05:26.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.495 06:00:28 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:26.495 06:00:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.495 06:00:28 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.495 06:00:28 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:26.495 06:00:28 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.495 06:00:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.495 ************************************ 00:05:26.495 START TEST event_perf 00:05:26.495 ************************************ 00:05:26.495 06:00:28 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.495 Running I/O for 1 seconds...[2024-08-13 06:00:28.209110] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:26.495 [2024-08-13 06:00:28.209284] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68933 ] 00:05:26.754 [2024-08-13 06:00:28.353876] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.754 [2024-08-13 06:00:28.404151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.754 Running I/O for 1 seconds...[2024-08-13 06:00:28.403997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.755 [2024-08-13 06:00:28.404118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.755 [2024-08-13 06:00:28.404304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.693 00:05:27.693 lcore 0: 200122 00:05:27.693 lcore 1: 200122 00:05:27.693 lcore 2: 200123 00:05:27.693 lcore 3: 200122 00:05:27.954 done. 
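Each lcore line above is the count of events that reactor processed during the 1-second measurement window requested with -t 1, so all four reactors turned over roughly 200 thousand events apiece in this run. The binary can also be run by hand from a built tree; a small sketch (the path matches this workspace, adjust for your checkout):

  # -m selects the reactor core mask, -t the measurement time in seconds.
  ./test/event/event_perf/event_perf -m 0xF -t 1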
00:05:27.954 00:05:27.954 real 0m1.330s 00:05:27.954 user 0m4.096s 00:05:27.954 sys 0m0.113s 00:05:27.954 06:00:29 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.954 06:00:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.954 ************************************ 00:05:27.954 END TEST event_perf 00:05:27.954 ************************************ 00:05:27.954 06:00:29 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.954 06:00:29 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:27.954 06:00:29 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.954 06:00:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.954 ************************************ 00:05:27.954 START TEST event_reactor 00:05:27.954 ************************************ 00:05:27.954 06:00:29 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.954 [2024-08-13 06:00:29.606396] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:27.954 [2024-08-13 06:00:29.606584] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68978 ] 00:05:28.216 [2024-08-13 06:00:29.749876] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.216 [2024-08-13 06:00:29.799854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.156 test_start 00:05:29.156 oneshot 00:05:29.156 tick 100 00:05:29.156 tick 100 00:05:29.156 tick 250 00:05:29.156 tick 100 00:05:29.156 tick 100 00:05:29.156 tick 100 00:05:29.156 tick 250 00:05:29.156 tick 500 00:05:29.156 tick 100 00:05:29.156 tick 100 00:05:29.156 tick 250 00:05:29.156 tick 100 00:05:29.156 tick 100 00:05:29.156 test_end 00:05:29.156 00:05:29.156 real 0m1.324s 00:05:29.156 user 0m1.131s 00:05:29.156 sys 0m0.086s 00:05:29.156 06:00:30 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.156 06:00:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.156 ************************************ 00:05:29.156 END TEST event_reactor 00:05:29.156 ************************************ 00:05:29.156 06:00:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.156 06:00:30 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:29.156 06:00:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.156 06:00:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.415 ************************************ 00:05:29.415 START TEST event_reactor_perf 00:05:29.415 ************************************ 00:05:29.415 06:00:30 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.415 [2024-08-13 06:00:30.994321] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:05:29.415 [2024-08-13 06:00:30.994452] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69009 ] 00:05:29.415 [2024-08-13 06:00:31.140007] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.415 [2024-08-13 06:00:31.187151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.796 test_start 00:05:30.796 test_end 00:05:30.796 Performance: 364150 events per second 00:05:30.796 00:05:30.796 real 0m1.321s 00:05:30.796 user 0m1.129s 00:05:30.796 sys 0m0.085s 00:05:30.796 06:00:32 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.796 06:00:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.796 ************************************ 00:05:30.796 END TEST event_reactor_perf 00:05:30.796 ************************************ 00:05:30.796 06:00:32 event -- event/event.sh@49 -- # uname -s 00:05:30.796 06:00:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.796 06:00:32 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.796 06:00:32 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.796 06:00:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.796 06:00:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.796 ************************************ 00:05:30.796 START TEST event_scheduler 00:05:30.796 ************************************ 00:05:30.796 06:00:32 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.796 * Looking for test storage... 00:05:30.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:30.796 06:00:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.796 06:00:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69066 00:05:30.796 06:00:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.796 06:00:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.796 06:00:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69066 00:05:30.796 06:00:32 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 69066 ']' 00:05:30.796 06:00:32 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.796 06:00:32 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.796 06:00:32 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.796 06:00:32 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.796 06:00:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.796 [2024-08-13 06:00:32.531600] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:05:30.796 [2024-08-13 06:00:32.531734] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69066 ] 00:05:31.054 [2024-08-13 06:00:32.682940] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.054 [2024-08-13 06:00:32.737423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.054 [2024-08-13 06:00:32.737556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.054 [2024-08-13 06:00:32.737596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.054 [2024-08-13 06:00:32.737676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.624 06:00:33 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:31.624 06:00:33 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:31.624 06:00:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:31.624 06:00:33 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.624 06:00:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.624 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.624 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.624 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.624 POWER: Cannot set governor of lcore 0 to performance 00:05:31.624 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.624 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.624 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:31.624 POWER: Unable to set Power Management Environment for lcore 0 00:05:31.624 [2024-08-13 06:00:33.386628] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:31.624 [2024-08-13 06:00:33.386670] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:31.624 [2024-08-13 06:00:33.386717] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:31.624 [2024-08-13 06:00:33.386763] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:31.624 [2024-08-13 06:00:33.386786] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:31.624 [2024-08-13 06:00:33.386835] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:31.624 06:00:33 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.624 06:00:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:31.624 06:00:33 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.624 06:00:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 [2024-08-13 06:00:33.460094] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
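The scheduler test application is now up, and the scheduler_create_thread subtest that follows drives it through RPC methods supplied by a test plugin rather than by core SPDK. Roughly the same calls can be issued by hand; a sketch assuming the plugin module from the scheduler test directory is on PYTHONPATH and the app is listening on the default local socket:

  # Create a pinned thread on core 0 at 100% active, throttle thread 11 to 50% active,
  # then delete thread 12 - mirroring the scheduler_create_thread calls logged below.
  export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler:$PYTHONPATH
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12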
00:05:31.884 06:00:33 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:31.884 06:00:33 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.884 06:00:33 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 ************************************ 00:05:31.884 START TEST scheduler_create_thread 00:05:31.884 ************************************ 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 2 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 3 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 4 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 5 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 6 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 7 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 8 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 9 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 10 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:31.884 06:00:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 06:00:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:33.261 06:00:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.261 06:00:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.261 06:00:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:33.261 06:00:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.640 ************************************ 00:05:34.640 END TEST scheduler_create_thread 00:05:34.640 ************************************ 00:05:34.640 06:00:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:34.640 00:05:34.640 real 0m2.614s 00:05:34.640 user 0m0.027s 00:05:34.640 sys 0m0.010s 00:05:34.640 06:00:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.640 06:00:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.640 06:00:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.640 06:00:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69066 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 69066 ']' 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 69066 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69066 00:05:34.640 killing process with pid 69066 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69066' 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 69066 00:05:34.640 06:00:36 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 69066 00:05:34.898 [2024-08-13 06:00:36.567026] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
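The scheduler app has shut down cleanly; the only errors in its log are the POWER/cpufreq messages above, which mean the build VM exposes no scaling_governor files, so the dpdk governor is skipped and only the dynamic scheduler itself is exercised. On a running target the same scheduler switch can be made over RPC; a minimal sketch using the default local socket (note that in this run framework_set_scheduler is issued while the app is still in its --wait-for-rpc phase, before framework_start_init):

  # Show the active scheduler, switch to the dynamic scheduler, then inspect
  # how lightweight threads are placed across reactors.
  ./scripts/rpc.py framework_get_scheduler
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_get_reactors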
00:05:35.157 00:05:35.157 real 0m4.473s 00:05:35.157 user 0m8.142s 00:05:35.157 sys 0m0.445s 00:05:35.157 06:00:36 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.157 ************************************ 00:05:35.157 END TEST event_scheduler 00:05:35.157 ************************************ 00:05:35.157 06:00:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.157 06:00:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:35.157 06:00:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:35.157 06:00:36 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.157 06:00:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.157 06:00:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.157 ************************************ 00:05:35.157 START TEST app_repeat 00:05:35.157 ************************************ 00:05:35.157 06:00:36 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=69172 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 69172' 00:05:35.157 Process app_repeat pid: 69172 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:35.157 spdk_app_start Round 0 00:05:35.157 06:00:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 69172 /var/tmp/spdk-nbd.sock 00:05:35.157 06:00:36 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69172 ']' 00:05:35.157 06:00:36 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.157 06:00:36 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.157 06:00:36 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.157 06:00:36 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.157 06:00:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.157 [2024-08-13 06:00:36.946525] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:05:35.157 [2024-08-13 06:00:36.946645] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69172 ] 00:05:35.416 [2024-08-13 06:00:37.080088] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.416 [2024-08-13 06:00:37.134124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.416 [2024-08-13 06:00:37.134142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.352 06:00:37 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.352 06:00:37 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:36.352 06:00:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.352 Malloc0 00:05:36.352 06:00:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.610 Malloc1 00:05:36.610 06:00:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.611 06:00:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.870 /dev/nbd0 00:05:36.870 06:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.870 06:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:36.870 06:00:38 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.870 1+0 records in 00:05:36.870 1+0 records out 00:05:36.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471955 s, 8.7 MB/s 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:36.870 06:00:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:36.870 06:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.870 06:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.870 06:00:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.130 /dev/nbd1 00:05:37.130 06:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.130 06:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.130 1+0 records in 00:05:37.130 1+0 records out 00:05:37.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053916 s, 7.6 MB/s 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:37.130 06:00:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:37.130 06:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.130 06:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.130 06:00:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.130 06:00:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.130 
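The waitfornbd helper traced above is what gates each nbd_start_disk call: it polls /proc/partitions for the new nbd device, then reads a single 4 KiB block with direct I/O to confirm the export actually serves data. A condensed sketch of that check follows; the 20-iteration retry limit and the dd/stat commands match the trace, while the sleep interval and the /tmp scratch path are stand-ins for details not visible here:

  waitfornbd() {
      local nbd_name=$1 i
      # wait for the kernel to publish the device in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # one direct-I/O read; a non-empty copy means the device is usable
      for ((i = 1; i <= 20; i++)); do
          dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
          if [ "$(stat -c %s /tmp/nbdtest)" != 0 ]; then
              rm -f /tmp/nbdtest
              return 0
          fi
      done
      return 1
  }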
06:00:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.390 { 00:05:37.390 "nbd_device": "/dev/nbd0", 00:05:37.390 "bdev_name": "Malloc0" 00:05:37.390 }, 00:05:37.390 { 00:05:37.390 "nbd_device": "/dev/nbd1", 00:05:37.390 "bdev_name": "Malloc1" 00:05:37.390 } 00:05:37.390 ]' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.390 { 00:05:37.390 "nbd_device": "/dev/nbd0", 00:05:37.390 "bdev_name": "Malloc0" 00:05:37.390 }, 00:05:37.390 { 00:05:37.390 "nbd_device": "/dev/nbd1", 00:05:37.390 "bdev_name": "Malloc1" 00:05:37.390 } 00:05:37.390 ]' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.390 /dev/nbd1' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.390 /dev/nbd1' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.390 256+0 records in 00:05:37.390 256+0 records out 00:05:37.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509555 s, 206 MB/s 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.390 256+0 records in 00:05:37.390 256+0 records out 00:05:37.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212862 s, 49.3 MB/s 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.390 256+0 records in 00:05:37.390 256+0 records out 00:05:37.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232938 s, 45.0 MB/s 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.390 06:00:39 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.390 06:00:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.650 06:00:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.909 06:00:39 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.909 06:00:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.168 06:00:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.168 06:00:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.440 06:00:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.718 [2024-08-13 06:00:40.273140] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.718 [2024-08-13 06:00:40.319568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.718 [2024-08-13 06:00:40.319576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.718 [2024-08-13 06:00:40.361769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.718 [2024-08-13 06:00:40.361910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.034 spdk_app_start Round 1 00:05:42.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.034 06:00:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.034 06:00:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:42.034 06:00:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 69172 /var/tmp/spdk-nbd.sock 00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69172 ']' 00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
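Each app_repeat round traced above (Round 0 has just completed, Round 1 is starting) runs the same data-verify cycle against the two malloc bdevs before the instance is killed with SIGTERM and restarted. Condensed into plain shell, with the long repository paths shortened and the write/verify passes collapsed into one loop, it amounts to roughly:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create 64 4096              # -> Malloc0
  rpc bdev_malloc_create 64 4096              # -> Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0
  rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest "$nbd"                          # verify read-back
  done
  rm nbdrandtest
  rpc nbd_stop_disk /dev/nbd0
  rpc nbd_stop_disk /dev/nbd1
  rpc spdk_kill_instance SIGTERM               # end of round; the app is restarted for the next one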
00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.034 06:00:43 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:42.034 06:00:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.034 Malloc0 00:05:42.034 06:00:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.034 Malloc1 00:05:42.294 06:00:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.294 06:00:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.294 /dev/nbd0 00:05:42.294 06:00:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.294 06:00:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.294 1+0 records in 00:05:42.294 1+0 records out 
00:05:42.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426079 s, 9.6 MB/s 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:42.294 06:00:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.554 /dev/nbd1 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.554 1+0 records in 00:05:42.554 1+0 records out 00:05:42.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384826 s, 10.6 MB/s 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:42.554 06:00:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.554 06:00:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.815 { 00:05:42.815 "nbd_device": "/dev/nbd0", 00:05:42.815 "bdev_name": "Malloc0" 00:05:42.815 }, 00:05:42.815 { 00:05:42.815 "nbd_device": "/dev/nbd1", 00:05:42.815 "bdev_name": "Malloc1" 00:05:42.815 } 
00:05:42.815 ]' 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.815 { 00:05:42.815 "nbd_device": "/dev/nbd0", 00:05:42.815 "bdev_name": "Malloc0" 00:05:42.815 }, 00:05:42.815 { 00:05:42.815 "nbd_device": "/dev/nbd1", 00:05:42.815 "bdev_name": "Malloc1" 00:05:42.815 } 00:05:42.815 ]' 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.815 /dev/nbd1' 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.815 /dev/nbd1' 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.815 06:00:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.075 256+0 records in 00:05:43.075 256+0 records out 00:05:43.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144154 s, 72.7 MB/s 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.075 256+0 records in 00:05:43.075 256+0 records out 00:05:43.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200299 s, 52.4 MB/s 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.075 256+0 records in 00:05:43.075 256+0 records out 00:05:43.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264903 s, 39.6 MB/s 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.075 06:00:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.075 06:00:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.335 06:00:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.335 06:00:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.594 06:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.854 06:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.854 06:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.854 06:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.854 06:00:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.854 06:00:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.854 06:00:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.854 06:00:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.854 06:00:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.854 06:00:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.114 [2024-08-13 06:00:45.756858] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.114 [2024-08-13 06:00:45.798568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.114 [2024-08-13 06:00:45.798600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.114 [2024-08-13 06:00:45.840619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.114 [2024-08-13 06:00:45.840683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.409 spdk_app_start Round 2 00:05:47.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.409 06:00:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.409 06:00:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:47.409 06:00:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 69172 /var/tmp/spdk-nbd.sock 00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69172 ']' 00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.409 06:00:48 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:47.409 06:00:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.409 Malloc0 00:05:47.409 06:00:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.669 Malloc1 00:05:47.669 06:00:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.669 06:00:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.929 /dev/nbd0 00:05:47.929 06:00:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.929 06:00:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.929 1+0 records in 00:05:47.929 1+0 records out 
00:05:47.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431323 s, 9.5 MB/s 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:47.929 06:00:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:47.929 06:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.929 06:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.929 06:00:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.189 /dev/nbd1 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.189 1+0 records in 00:05:48.189 1+0 records out 00:05:48.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398346 s, 10.3 MB/s 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:48.189 06:00:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.189 { 00:05:48.189 "nbd_device": "/dev/nbd0", 00:05:48.189 "bdev_name": "Malloc0" 00:05:48.189 }, 00:05:48.189 { 00:05:48.189 "nbd_device": "/dev/nbd1", 00:05:48.189 "bdev_name": "Malloc1" 00:05:48.189 } 
00:05:48.189 ]' 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.189 { 00:05:48.189 "nbd_device": "/dev/nbd0", 00:05:48.189 "bdev_name": "Malloc0" 00:05:48.189 }, 00:05:48.189 { 00:05:48.189 "nbd_device": "/dev/nbd1", 00:05:48.189 "bdev_name": "Malloc1" 00:05:48.189 } 00:05:48.189 ]' 00:05:48.189 06:00:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.449 /dev/nbd1' 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.449 /dev/nbd1' 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.449 256+0 records in 00:05:48.449 256+0 records out 00:05:48.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138749 s, 75.6 MB/s 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.449 256+0 records in 00:05:48.449 256+0 records out 00:05:48.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199402 s, 52.6 MB/s 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.449 256+0 records in 00:05:48.449 256+0 records out 00:05:48.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257625 s, 40.7 MB/s 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.449 06:00:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.449 06:00:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.709 06:00:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.969 06:00:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.969 06:00:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.969 06:00:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.969 06:00:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.970 06:00:50 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.230 06:00:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.230 06:00:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.496 06:00:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.496 [2024-08-13 06:00:51.180219] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.496 [2024-08-13 06:00:51.221987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.496 [2024-08-13 06:00:51.221995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.496 [2024-08-13 06:00:51.263823] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.496 [2024-08-13 06:00:51.263883] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.816 06:00:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 69172 /var/tmp/spdk-nbd.sock 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 69172 ']' 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:52.816 06:00:54 event.app_repeat -- event/event.sh@39 -- # killprocess 69172 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 69172 ']' 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 69172 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69172 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69172' 00:05:52.816 killing process with pid 69172 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@965 -- # kill 69172 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@970 -- # wait 69172 00:05:52.816 spdk_app_start is called in Round 0. 00:05:52.816 Shutdown signal received, stop current app iteration 00:05:52.816 Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 reinitialization... 00:05:52.816 spdk_app_start is called in Round 1. 00:05:52.816 Shutdown signal received, stop current app iteration 00:05:52.816 Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 reinitialization... 00:05:52.816 spdk_app_start is called in Round 2. 00:05:52.816 Shutdown signal received, stop current app iteration 00:05:52.816 Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 reinitialization... 00:05:52.816 spdk_app_start is called in Round 3. 00:05:52.816 Shutdown signal received, stop current app iteration 00:05:52.816 06:00:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:52.816 06:00:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:52.816 00:05:52.816 real 0m17.599s 00:05:52.816 user 0m38.869s 00:05:52.816 sys 0m2.676s 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.816 ************************************ 00:05:52.816 END TEST app_repeat 00:05:52.816 ************************************ 00:05:52.816 06:00:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.816 06:00:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:52.816 06:00:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:52.816 06:00:54 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.816 06:00:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.816 06:00:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.816 ************************************ 00:05:52.816 START TEST cpu_locks 00:05:52.816 ************************************ 00:05:52.816 06:00:54 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.076 * Looking for test storage... 
00:05:53.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.076 06:00:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.076 06:00:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.076 06:00:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.076 06:00:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.076 06:00:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.076 06:00:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.076 06:00:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.076 ************************************ 00:05:53.076 START TEST default_locks 00:05:53.076 ************************************ 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69584 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 69584 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 69584 ']' 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.076 06:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.076 [2024-08-13 06:00:54.793134] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
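The locks_exist check this test performs a few entries further down is nothing more than asking lslocks whether the freshly started target still holds a POSIX lock whose path contains spdk_cpu_lock. A minimal reproduction, assuming the pid printed in this run (69584) is still alive, could be:

    pid=69584                                   # spdk_tgt pid from this run (assumption: still running)
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock is held by pid $pid"
    else
        echo "no spdk_cpu_lock entry found for pid $pid"
    fi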
00:05:53.076 [2024-08-13 06:00:54.793331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:05:53.334 [2024-08-13 06:00:54.940539] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.334 [2024-08-13 06:00:54.988634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.900 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.900 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:53.900 06:00:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 69584 00:05:53.900 06:00:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 69584 00:05:53.900 06:00:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.158 06:00:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 69584 00:05:54.158 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 69584 ']' 00:05:54.158 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 69584 00:05:54.158 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:54.158 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.158 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69584 00:05:54.418 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.418 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.418 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69584' 00:05:54.418 killing process with pid 69584 00:05:54.418 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 69584 00:05:54.418 06:00:55 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 69584 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69584 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@646 -- # local es=0 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69584 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # waitforlisten 69584 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 69584 ']' 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.678 06:00:56 
event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.678 ERROR: process (pid: 69584) is no longer running 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.678 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69584) - No such process 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # es=1 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.678 00:05:54.678 real 0m1.693s 00:05:54.678 user 0m1.668s 00:05:54.678 sys 0m0.577s 00:05:54.678 ************************************ 00:05:54.678 END TEST default_locks 00:05:54.678 ************************************ 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.678 06:00:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.678 06:00:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:54.678 06:00:56 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.678 06:00:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.678 06:00:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.678 ************************************ 00:05:54.678 START TEST default_locks_via_rpc 00:05:54.678 ************************************ 00:05:54.678 06:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:54.678 06:00:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69637 00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 69637 00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69637 ']' 00:05:54.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
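The default_locks_via_rpc test starting here toggles the same core locks at runtime instead of at startup: after framework_disable_cpumask_locks the no_locks check finds zero lock files, and framework_enable_cpumask_locks brings the lock back. Outside the harness the two calls could be issued directly, with the socket path taken from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the core lock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim it again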
00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.679 06:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.942 [2024-08-13 06:00:56.554367] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:54.942 [2024-08-13 06:00:56.554506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69637 ] 00:05:54.942 [2024-08-13 06:00:56.701406] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.207 [2024-08-13 06:00:56.750855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 69637 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 69637 00:05:55.774 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 69637 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 69637 ']' 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 69637 00:05:56.341 06:00:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69637 00:05:56.341 killing process with pid 69637 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69637' 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 69637 00:05:56.341 06:00:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 69637 00:05:56.600 00:05:56.600 real 0m1.821s 00:05:56.600 user 0m1.783s 00:05:56.600 sys 0m0.647s 00:05:56.600 ************************************ 00:05:56.600 END TEST default_locks_via_rpc 00:05:56.600 ************************************ 00:05:56.600 06:00:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.600 06:00:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.600 06:00:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.600 06:00:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.600 06:00:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.600 06:00:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.600 ************************************ 00:05:56.600 START TEST non_locking_app_on_locked_coremask 00:05:56.600 ************************************ 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69689 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 69689 /var/tmp/spdk.sock 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69689 ']' 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
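The non_locking_app_on_locked_coremask scenario prepared here runs two targets on the same core: the first one (started just above) claims core 0, while the second one opts out of locking and listens on its own RPC socket, as the trace below shows. Reduced to the plain commands used in this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                                                # claims core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # shares core 0 without taking the lock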
00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.600 06:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.861 [2024-08-13 06:00:58.438299] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:56.861 [2024-08-13 06:00:58.438524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69689 ] 00:05:56.861 [2024-08-13 06:00:58.585526] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.861 [2024-08-13 06:00:58.635175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69699 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 69699 /var/tmp/spdk2.sock 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69699 ']' 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.797 06:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.797 [2024-08-13 06:00:59.345063] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:05:57.797 [2024-08-13 06:00:59.345270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69699 ] 00:05:57.797 [2024-08-13 06:00:59.481647] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.797 [2024-08-13 06:00:59.481711] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.797 [2024-08-13 06:00:59.583785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 69689 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69689 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 69689 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69689 ']' 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69689 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.731 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69689 00:05:58.990 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.990 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.990 killing process with pid 69689 00:05:58.990 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69689' 00:05:58.990 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69689 00:05:58.990 06:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69689 00:05:59.556 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 69699 00:05:59.556 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69699 ']' 00:05:59.556 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69699 00:05:59.556 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:59.556 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.556 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69699 00:05:59.815 killing process with pid 69699 00:05:59.815 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.815 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.815 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69699' 00:05:59.815 06:01:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69699 00:05:59.815 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69699 00:06:00.074 ************************************ 00:06:00.074 END TEST non_locking_app_on_locked_coremask 00:06:00.074 ************************************ 00:06:00.074 00:06:00.074 real 0m3.402s 00:06:00.074 user 0m3.556s 00:06:00.074 sys 0m1.014s 00:06:00.074 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.074 06:01:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.074 06:01:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.074 06:01:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.074 06:01:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.074 06:01:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.074 ************************************ 00:06:00.074 START TEST locking_app_on_unlocked_coremask 00:06:00.074 ************************************ 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=69763 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 69763 /var/tmp/spdk.sock 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69763 ']' 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.074 06:01:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.333 [2024-08-13 06:01:01.901906] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:06:00.333 [2024-08-13 06:01:01.902489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69763 ] 00:06:00.333 [2024-08-13 06:01:02.050858] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.333 [2024-08-13 06:01:02.051003] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.333 [2024-08-13 06:01:02.095774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69779 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 69779 /var/tmp/spdk2.sock 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69779 ']' 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.269 06:01:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.269 [2024-08-13 06:01:02.813956] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:06:01.269 [2024-08-13 06:01:02.814167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69779 ] 00:06:01.269 [2024-08-13 06:01:02.948648] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.269 [2024-08-13 06:01:03.042926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.203 06:01:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.203 06:01:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:02.203 06:01:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 69779 00:06:02.203 06:01:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69779 00:06:02.203 06:01:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 69763 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69763 ']' 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 69763 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69763 00:06:03.138 killing process with pid 69763 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.138 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69763' 00:06:03.139 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 69763 00:06:03.139 06:01:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 69763 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 69779 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69779 ']' 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 69779 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69779 00:06:03.706 killing process with pid 69779 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.706 06:01:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69779' 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 69779 00:06:03.706 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 69779 00:06:04.273 00:06:04.273 real 0m4.023s 00:06:04.273 user 0m4.224s 00:06:04.273 sys 0m1.266s 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.273 ************************************ 00:06:04.273 END TEST locking_app_on_unlocked_coremask 00:06:04.273 ************************************ 00:06:04.273 06:01:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.273 06:01:05 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.273 06:01:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.273 06:01:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.273 ************************************ 00:06:04.273 START TEST locking_app_on_locked_coremask 00:06:04.273 ************************************ 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69848 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 69848 /var/tmp/spdk.sock 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69848 ']' 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.273 06:01:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.273 [2024-08-13 06:01:05.985756] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
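locking_app_on_locked_coremask, which begins here, is the negative counterpart: the second target is started on the already-claimed core 0 without --disable-cpumask-locks, and the step only passes if that start-up fails (the claim_cpu_cores error further down). Stripped of the NOT/waitforlisten plumbing, and assuming spdk_tgt exits non-zero when it cannot take the lock, the step amounts to:

    if ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second target refused to start: core 0 is already locked"
    fi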
00:06:04.273 [2024-08-13 06:01:05.985886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69848 ] 00:06:04.531 [2024-08-13 06:01:06.112631] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.531 [2024-08-13 06:01:06.156257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69864 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69864 /var/tmp/spdk2.sock 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69864 /var/tmp/spdk2.sock 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # waitforlisten 69864 /var/tmp/spdk2.sock 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69864 ']' 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.097 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.098 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.098 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.098 06:01:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 [2024-08-13 06:01:06.887608] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:06:05.098 [2024-08-13 06:01:06.887827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69864 ] 00:06:05.356 [2024-08-13 06:01:07.023150] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69848 has claimed it. 00:06:05.356 [2024-08-13 06:01:07.023224] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.922 ERROR: process (pid: 69864) is no longer running 00:06:05.923 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69864) - No such process 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 69848 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69848 00:06:05.923 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 69848 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69848 ']' 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69848 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69848 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69848' 00:06:06.182 killing process with pid 69848 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69848 00:06:06.182 06:01:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69848 00:06:06.756 00:06:06.756 real 0m2.346s 00:06:06.756 user 0m2.529s 00:06:06.756 sys 0m0.653s 00:06:06.756 06:01:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.756 06:01:08 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:06.756 ************************************ 00:06:06.756 END TEST locking_app_on_locked_coremask 00:06:06.756 ************************************ 00:06:06.756 06:01:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.756 06:01:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.756 06:01:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.756 06:01:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.756 ************************************ 00:06:06.756 START TEST locking_overlapped_coremask 00:06:06.756 ************************************ 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69906 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 69906 /var/tmp/spdk.sock 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 69906 ']' 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.756 06:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.756 [2024-08-13 06:01:08.408954] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:06:06.756 [2024-08-13 06:01:08.409103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69906 ] 00:06:07.028 [2024-08-13 06:01:08.553481] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.028 [2024-08-13 06:01:08.604677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.028 [2024-08-13 06:01:08.604768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.028 [2024-08-13 06:01:08.604879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69924 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69924 /var/tmp/spdk2.sock 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69924 /var/tmp/spdk2.sock 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # waitforlisten 69924 /var/tmp/spdk2.sock 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 69924 ']' 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.595 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.595 [2024-08-13 06:01:09.309330] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
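For locking_overlapped_coremask the two masks intersect only on core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4), which is exactly the core named in the claim error below. The check_remaining_locks helper used at the end of the test can be approximated by listing the lock files directly, using the paths printed in this run:

    printf '%s\n' /var/tmp/spdk_cpu_lock_*    # expected here: _000 _001 _002, one per core still held by pid 69906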
00:06:07.595 [2024-08-13 06:01:09.309530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69924 ] 00:06:07.853 [2024-08-13 06:01:09.444957] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69906 has claimed it. 00:06:07.853 [2024-08-13 06:01:09.445017] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.420 ERROR: process (pid: 69924) is no longer running 00:06:08.420 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69924) - No such process 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 69906 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 69906 ']' 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 69906 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69906 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69906' 00:06:08.420 killing process with pid 69906 00:06:08.420 06:01:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 69906 00:06:08.420 06:01:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 69906 00:06:08.680 00:06:08.680 real 0m2.043s 00:06:08.680 user 0m5.400s 00:06:08.680 sys 0m0.501s 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.680 ************************************ 00:06:08.680 END TEST locking_overlapped_coremask 00:06:08.680 ************************************ 00:06:08.680 06:01:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:08.680 06:01:10 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.680 06:01:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.680 06:01:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.680 ************************************ 00:06:08.680 START TEST locking_overlapped_coremask_via_rpc 00:06:08.680 ************************************ 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69966 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 69966 /var/tmp/spdk.sock 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69966 ']' 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.680 06:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.939 [2024-08-13 06:01:10.519788] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:06:08.939 [2024-08-13 06:01:10.519998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69966 ] 00:06:08.939 [2024-08-13 06:01:10.666758] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.939 [2024-08-13 06:01:10.666824] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:08.939 [2024-08-13 06:01:10.718243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:08.939 [2024-08-13 06:01:10.718355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.939 [2024-08-13 06:01:10.718462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:09.875 06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:09.875 06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:09.875 06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69984
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 69984 /var/tmp/spdk2.sock
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69984 ']'
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
06:01:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.875 [2024-08-13 06:01:11.414132] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization...
00:06:09.875 [2024-08-13 06:01:11.414334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69984 ]
00:06:09.875 [2024-08-13 06:01:11.556521] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:09.875 [2024-08-13 06:01:11.556582] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:09.875 [2024-08-13 06:01:11.659975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:09.875 [2024-08-13 06:01:11.660052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:09.875 [2024-08-13 06:01:11.660107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:06:10.811 06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]]
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@646 -- # local es=0
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:10.811 [2024-08-13 06:01:12.285255] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69966 has claimed it.
00:06:10.811 request:
00:06:10.811 {
00:06:10.811 "method": "framework_enable_cpumask_locks",
00:06:10.811 "req_id": 1
00:06:10.811 }
00:06:10.811 Got JSON-RPC error response
00:06:10.811 response:
00:06:10.811 {
00:06:10.811 "code": -32603,
00:06:10.811 "message": "Failed to claim CPU core: 2"
00:06:10.811 }
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]]
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # es=1
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@657 -- # (( es > 128 ))
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]]
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@673 -- # (( !es == 0 ))
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 69966 /var/tmp/spdk.sock
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69966 ']'
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 69984 /var/tmp/spdk2.sock
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69984 ']'
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:10.811 06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.070 06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:11.070
00:06:11.070 real 0m2.303s
00:06:11.070 user 0m1.066s
00:06:11.070 sys 0m0.171s
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST locking_overlapped_coremask_via_rpc
************************************
06:01:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
06:01:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 69966 ]]
06:01:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 69966
06:01:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69966 ']'
06:01:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69966
06:01:12 event.cpu_locks -- common/autotest_common.sh@951 -- # uname
06:01:12 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
06:01:12 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69966
killing process with pid 69966
06:01:12 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0
06:01:12 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
06:01:12 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69966'
06:01:12 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 69966
06:01:12 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 69966
00:06:11.636 06:01:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 69984 ]]
06:01:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 69984
06:01:13 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69984 ']'
06:01:13 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69984
06:01:13 event.cpu_locks -- common/autotest_common.sh@951 -- # uname
06:01:13 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:11.637 06:01:13 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69984
killing process with pid 69984
06:01:13 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2
06:01:13 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
06:01:13 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69984'
06:01:13 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 69984
06:01:13 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 69984
00:06:11.895 06:01:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
06:01:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
06:01:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 69966 ]]
06:01:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 69966
06:01:13 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69966 ']'
06:01:13 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69966
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (69966) - No such process
06:01:13 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 69966 is not found'
Process with pid 69966 is not found
Process with pid 69984 is not found
06:01:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 69984 ]]
06:01:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 69984
06:01:13 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69984 ']'
06:01:13 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69984
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (69984) - No such process
06:01:13 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 69984 is not found'
06:01:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:11.895
00:06:11.895 real 0m19.085s
00:06:11.895 user 0m31.446s
00:06:11.895 sys 0m5.879s
06:01:13 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST cpu_locks
************************************
00:06:12.152
00:06:12.152 real 0m45.654s
00:06:12.152 user 1m24.983s
00:06:12.152 sys 0m9.640s
06:01:13 event -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:13 event -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST event
************************************
06:01:13 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
06:01:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
06:01:13 -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:13 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST thread
************************************
06:01:13 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
* Looking for test storage...
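For readers tracing the -32603 failure above: the error fires when a second target tries to claim a CPU core that an earlier instance already holds via a lock file under /var/tmp/spdk_cpu_lock_*. A minimal sketch of that scenario, assuming a built spdk_tgt and rpc.py in the usual repo locations (paths and ampersands are illustrative, not part of the original trace):

  # First target claims cores 0-2 (mask 0x7) and creates
  # /var/tmp/spdk_cpu_lock_000..002 on startup.
  ./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &

  # Second target uses an overlapping mask (0x1c = cores 2,3,4) but defers
  # core claiming with --disable-cpumask-locks, so startup still succeeds.
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

  # Asking the second target to take its locks now fails with -32603,
  # because core 2 is already locked by the first process.
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks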
00:06:12.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:06:12.152 06:01:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
06:01:13 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
06:01:13 thread -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:13 thread -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST thread_poller_perf
************************************
06:01:13 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
[2024-08-13 06:01:13.920964] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization...
[2024-08-13 06:01:13.921099] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70114 ]
00:06:12.411 [2024-08-13 06:01:14.050496] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.411 [2024-08-13 06:01:14.094358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.411 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:13.800 ======================================
00:06:13.800 busy:2302163864 (cyc)
00:06:13.800 total_run_count: 413000
00:06:13.800 tsc_hz: 2290000000 (cyc)
00:06:13.800 ======================================
00:06:13.800 poller_cost: 5574 (cyc), 2434 (nsec)
00:06:13.800
00:06:13.800 real 0m1.304s
00:06:13.800 user 0m1.134s
00:06:13.800 sys 0m0.066s
06:01:15 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST thread_poller_perf
************************************
06:01:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
06:01:15 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
06:01:15 thread -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:15 thread -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST thread_poller_perf
************************************
06:01:15 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
[2024-08-13 06:01:15.299528] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization...
00:06:13.800 [2024-08-13 06:01:15.299700] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ]
[2024-08-13 06:01:15.443496] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-08-13 06:01:15.487673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:15.181 ======================================
00:06:15.181 busy:2293161138 (cyc)
00:06:15.181 total_run_count: 5397000
00:06:15.181 tsc_hz: 2290000000 (cyc)
00:06:15.181 ======================================
00:06:15.181 poller_cost: 424 (cyc), 185 (nsec)
00:06:15.181
00:06:15.181 real 0m1.321s
00:06:15.181 user 0m1.135s
00:06:15.181 sys 0m0.081s
06:01:16 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST thread_poller_perf
************************************
06:01:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:15.181
00:06:15.181 real 0m2.869s
00:06:15.181 user 0m2.360s
00:06:15.181 sys 0m0.303s
06:01:16 thread -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:16 thread -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST thread
************************************
06:01:16 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]]
06:01:16 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
06:01:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
06:01:16 -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:16 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST app_cmdline
************************************
06:01:16 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/app
06:01:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
06:01:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=70220
06:01:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
06:01:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 70220
06:01:16 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 70220 ']'
06:01:16 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
06:01:16 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100
06:01:16 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
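A note on reading the poller_perf summaries above: poller_cost is a derived figure, not an independent measurement. It is busy cycles divided by total_run_count, and the nanosecond value is that quotient divided by tsc_hz. A quick check of the first run with plain shell arithmetic (illustrative, not part of the test itself):

  # cycles per poller invocation: 2302163864 / 413000 ~= 5574 (cyc)
  echo $(( 2302163864 / 413000 ))

  # convert to nanoseconds at tsc_hz = 2.29 GHz: 5574 / 2.29 ~= 2434 (nsec)
  awk 'BEGIN { printf "%d\n", 5574 / 2.29 }'

Both results match the reported "poller_cost: 5574 (cyc), 2434 (nsec)"; the zero-period run that follows reproduces the same arithmetic with a much lower per-poll cost.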
00:06:15.181 06:01:16 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable
06:01:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x
[2024-08-13 06:01:16.904874] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization...
[2024-08-13 06:01:16.905086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ]
00:06:15.441 [2024-08-13 06:01:17.049770] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.441 [2024-08-13 06:01:17.096224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.011 06:01:17 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 ))
06:01:17 app_cmdline -- common/autotest_common.sh@860 -- # return 0
06:01:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:06:16.270 {
00:06:16.270 "version": "SPDK v24.09-pre git sha1 7c739692e",
00:06:16.270 "fields": {
00:06:16.270 "major": 24,
00:06:16.270 "minor": 9,
00:06:16.270 "patch": 0,
00:06:16.270 "suffix": "-pre",
00:06:16.270 "commit": "7c739692e"
00:06:16.270 }
00:06:16.270 }
06:01:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
06:01:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
06:01:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
06:01:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
06:01:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
06:01:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
06:01:17 app_cmdline -- app/cmdline.sh@26 -- # sort
06:01:17 app_cmdline -- common/autotest_common.sh@557 -- # xtrace_disable
06:01:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x
06:01:17 app_cmdline -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]]
06:01:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
06:01:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
06:01:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
06:01:17 app_cmdline -- common/autotest_common.sh@646 -- # local es=0
06:01:17 app_cmdline -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
06:01:17 app_cmdline -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
06:01:17 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in
06:01:17 app_cmdline -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
06:01:17 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in
06:01:17 app_cmdline -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
06:01:17 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in
06:01:17 app_cmdline -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
06:01:17 app_cmdline -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
06:01:17 app_cmdline -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:16.529 request:
00:06:16.529 {
00:06:16.529 "method": "env_dpdk_get_mem_stats",
00:06:16.529 "req_id": 1
00:06:16.529 }
00:06:16.529 Got JSON-RPC error response
00:06:16.529 response:
00:06:16.529 {
00:06:16.529 "code": -32601,
00:06:16.529 "message": "Method not found"
00:06:16.529 }
06:01:18 app_cmdline -- common/autotest_common.sh@649 -- # es=1
06:01:18 app_cmdline -- common/autotest_common.sh@657 -- # (( es > 128 ))
06:01:18 app_cmdline -- common/autotest_common.sh@668 -- # [[ -n '' ]]
06:01:18 app_cmdline -- common/autotest_common.sh@673 -- # (( !es == 0 ))
06:01:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 70220
06:01:18 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 70220 ']'
06:01:18 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 70220
06:01:18 app_cmdline -- common/autotest_common.sh@951 -- # uname
06:01:18 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
06:01:18 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70220
killing process with pid 70220
06:01:18 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0
06:01:18 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
06:01:18 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70220'
06:01:18 app_cmdline -- common/autotest_common.sh@965 -- # kill 70220
06:01:18 app_cmdline -- common/autotest_common.sh@970 -- # wait 70220
00:06:17.097
00:06:17.097 real 0m1.915s
00:06:17.097 user 0m2.208s
00:06:17.097 sys 0m0.493s
************************************
END TEST app_cmdline
************************************
06:01:18 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x
06:01:18 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
06:01:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
06:01:18 -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:18 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST version
************************************
06:01:18 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
* Looking for test storage...
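The cmdline run above exercises the RPC whitelist: only the two methods named at startup are dispatchable, and everything else is rejected with -32601 before it reaches a handler. A hedged reproduction against a target started the same way (commands mirror the trace; the socket default is assumed):

  # Target was launched with:
  #   spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
  ./scripts/rpc.py spdk_get_version        # succeeds, returns the version JSON above
  ./scripts/rpc.py rpc_get_methods         # succeeds, lists exactly the two allowed methods

  # Any non-whitelisted method is refused with code -32601 "Method not found":
  ./scripts/rpc.py env_dpdk_get_mem_stats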
00:06:17.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
06:01:18 version -- app/version.sh@17 -- # get_header_version major
06:01:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
06:01:18 version -- app/version.sh@14 -- # cut -f2
06:01:18 version -- app/version.sh@14 -- # tr -d '"'
06:01:18 version -- app/version.sh@17 -- # major=24
06:01:18 version -- app/version.sh@18 -- # get_header_version minor
06:01:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
06:01:18 version -- app/version.sh@14 -- # cut -f2
06:01:18 version -- app/version.sh@14 -- # tr -d '"'
06:01:18 version -- app/version.sh@18 -- # minor=9
06:01:18 version -- app/version.sh@19 -- # get_header_version patch
06:01:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
06:01:18 version -- app/version.sh@14 -- # cut -f2
06:01:18 version -- app/version.sh@14 -- # tr -d '"'
06:01:18 version -- app/version.sh@19 -- # patch=0
06:01:18 version -- app/version.sh@20 -- # get_header_version suffix
06:01:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
06:01:18 version -- app/version.sh@14 -- # cut -f2
06:01:18 version -- app/version.sh@14 -- # tr -d '"'
06:01:18 version -- app/version.sh@20 -- # suffix=-pre
06:01:18 version -- app/version.sh@22 -- # version=24.9
06:01:18 version -- app/version.sh@25 -- # (( patch != 0 ))
06:01:18 version -- app/version.sh@28 -- # version=24.9rc0
06:01:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
06:01:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:17.355 06:01:18 version -- app/version.sh@30 -- # py_version=24.9rc0
06:01:18 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:06:17.355
00:06:17.355 real 0m0.218s
00:06:17.355 user 0m0.119s
00:06:17.355 sys 0m0.151s
************************************
END TEST version
************************************
06:01:18 version -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:18 version -- common/autotest_common.sh@10 -- # set +x
06:01:18 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']'
06:01:18 -- spdk/autotest.sh@201 -- # [[ 1 -eq 1 ]]
06:01:18 -- spdk/autotest.sh@202 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
06:01:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
06:01:18 -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:18 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST bdev_raid
************************************
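The version check above recovers each component by grepping the matching #define out of include/spdk/version.h, cutting the tab-delimited value field, and stripping quotes, then compares the result against the Python package's reported version. A condensed sketch of that pipeline, wrapped in a hypothetical helper for illustration (the individual grep/cut/tr commands are the ones in the trace):

  get_header_version() {  # e.g. get_header_version MAJOR  ->  24
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
          | cut -f2 | tr -d '"'
  }
  # 24 + 9 + "-pre" combine into 24.9, normalized to 24.9rc0 for the
  # comparison with python3 -c 'import spdk; print(spdk.__version__)'.
  echo "$(get_header_version MAJOR).$(get_header_version MINOR)$(get_header_version SUFFIX)"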
00:06:17.355 06:01:18 bdev_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
06:01:19 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
06:01:19 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e
06:01:19 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
06:01:19 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest
06:01:19 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT
06:01:19 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512
06:01:19 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
06:01:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
06:01:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST raid0_resize_superblock_test
************************************
06:01:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1121 -- # raid_resize_superblock_test 0
06:01:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0
06:01:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=70370
06:01:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 70370'
Process raid pid: 70370
06:01:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
06:01:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 70370 /var/tmp/spdk-raid.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
06:01:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 70370 ']'
06:01:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
06:01:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100
06:01:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
06:01:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable
06:01:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-08-13 06:01:19.184143] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization...
00:06:17.613 [2024-08-13 06:01:19.184270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:17.613 [2024-08-13 06:01:19.313990] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.613 [2024-08-13 06:01:19.365714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.871 [2024-08-13 06:01:19.410023] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:17.871 [2024-08-13 06:01:19.410074] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:18.436 06:01:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 ))
06:01:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # return 0
06:01:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512
00:06:18.694 malloc0
06:01:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0
[2024-08-13 06:01:20.487809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-08-13 06:01:20.488040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-08-13 06:01:20.488113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
[2024-08-13 06:01:20.488147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-08-13 06:01:20.490938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-08-13 06:01:20.490981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
06:01:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0
00:06:19.211 9456544a-18e1-4d95-bdc5-fe2abf7a00c3
06:01:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64
00:06:19.469 757d9874-436e-459c-bf9b-4d98b732b6bf
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64
00:06:19.469 74734b00-d24d-45f3-b49d-9981cff4b98a
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s
[2024-08-13 06:01:21.393274] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 757d9874-436e-459c-bf9b-4d98b732b6bf is claimed
[2024-08-13 06:01:21.393416] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 74734b00-d24d-45f3-b49d-9981cff4b98a is claimed
[2024-08-13 06:01:21.393578] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
[2024-08-13 06:01:21.393590] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-08-13 06:01:21.393894] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
[2024-08-13 06:01:21.394066] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
[2024-08-13 06:01:21.394083] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200
[2024-08-13 06:01:21.394284] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:19.992 06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 ))
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:20.258 06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 ))
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
[2024-08-13 06:01:21.996514] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 ))
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100
[2024-08-13 06:01:22.200097] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-08-13 06:01:22.200141] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '757d9874-436e-459c-bf9b-4d98b732b6bf' was resized: old size 131072, new size 204800
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100
[2024-08-13 06:01:22.403679] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-08-13 06:01:22.403721] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '74734b00-d24d-45f3-b49d-9981cff4b98a' was resized: old size 131072, new size 204800
[2024-08-13 06:01:22.403753] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:20.775 06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks'
00:06:21.034 06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 ))
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks'
00:06:21.294 06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 ))
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks'
[2024-08-13 06:01:23.034653] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 ))
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0
[2024-08-13 06:01:23.238117] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-08-13 06:01:23.238202] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-08-13 06:01:23.238222] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-08-13 06:01:23.238235] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-08-13 06:01:23.238346] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-08-13 06:01:23.238386] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-08-13 06:01:23.238396] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0
[2024-08-13 06:01:23.441684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-08-13 06:01:23.441756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-08-13 06:01:23.441780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
[2024-08-13 06:01:23.441789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-08-13 06:01:23.443867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-08-13 06:01:23.443950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
[2024-08-13 06:01:23.445533] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 757d9874-436e-459c-bf9b-4d98b732b6bf
[2024-08-13 06:01:23.445618] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 757d9874-436e-459c-bf9b-4d98b732b6bf is claimed
[2024-08-13 06:01:23.445709] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 74734b00-d24d-45f3-b49d-9981cff4b98a
[2024-08-13 06:01:23.445724] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 74734b00-d24d-45f3-b49d-9981cff4b98a is claimed
[2024-08-13 06:01:23.445828] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 74734b00-d24d-45f3-b49d-9981cff4b98a (2) smaller than existing raid bdev Raid (3)
[2024-08-13 06:01:23.445853] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
[2024-08-13 06:01:23.445861] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-08-13 06:01:23.446097] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
[2024-08-13 06:01:23.446249] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
[2024-08-13 06:01:23.446259] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580
[2024-08-13 06:01:23.446421] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
pt0
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks'
[2024-08-13 06:01:23.649832] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 ))
06:01:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 70370
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 70370 ']'
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # kill -0 70370
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@951 -- # uname
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:22.076 06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70370
killing process with pid 70370
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70370'
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@965 -- # kill 70370
[2024-08-13 06:01:23.715848] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-08-13 06:01:23.715954] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-08-13 06:01:23.716005] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
06:01:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # wait 70370
00:06:22.076 [2024-08-13 06:01:23.716014] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline
00:06:22.341 [2024-08-13 06:01:23.876850] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:22.341 06:01:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0
00:06:22.341
00:06:22.341 real 0m5.012s
00:06:22.341 user 0m8.057s
00:06:22.341 sys 0m0.869s
************************************
END TEST raid0_resize_superblock_test
************************************
06:01:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:22.600 06:01:24 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
06:01:24 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
06:01:24 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST raid1_resize_superblock_test
************************************
06:01:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1121 -- # raid_resize_superblock_test 1
06:01:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1
06:01:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=70481
06:01:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
06:01:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 70481'
Process raid pid: 70481
06:01:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 70481 /var/tmp/spdk-raid.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
06:01:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 70481 ']'
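For orientation, the raid0 pass that just finished (and the raid1 pass that follows, which differs only in -r 1 and the resulting block counts) reduces to one RPC sequence. A sketch assuming the same rpc.py socket as the trace, with the sizes from this run:

  rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create -b malloc0 512 512            # backing bdev
  $rpc bdev_passthru_create -b malloc0 -p pt0           # pt0 hosts the lvstore
  $rpc bdev_lvol_create_lvstore pt0 lvs0
  $rpc bdev_lvol_create -l lvs0 lvol0 64                # two 64 MiB lvols
  $rpc bdev_lvol_create -l lvs0 lvol1 64
  $rpc bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s   # -s: on-disk superblock
  $rpc bdev_lvol_resize lvs0/lvol0 100                  # grow both base bdevs
  $rpc bdev_lvol_resize lvs0/lvol1 100
  $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'     # raid grew from 245760 to 393216

The superblock (-s) is what lets pt0 be deleted and re-created with the array re-assembled from on-disk metadata, as the examine_cont/examine_sb debug lines above show.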
00:06:22.600 06:01:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
06:01:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100
06:01:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
06:01:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable
06:01:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-08-13 06:01:24.262614] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization...
00:06:22.600 [2024-08-13 06:01:24.262737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:22.859 [2024-08-13 06:01:24.411469] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.859 [2024-08-13 06:01:24.460593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.859 [2024-08-13 06:01:24.502889] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:22.859 [2024-08-13 06:01:24.502924] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:23.422 06:01:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 ))
06:01:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # return 0
06:01:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512
00:06:23.681 malloc0
06:01:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0
[2024-08-13 06:01:25.619984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-08-13 06:01:25.620180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-08-13 06:01:25.620214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
[2024-08-13 06:01:25.620224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-08-13 06:01:25.622485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-08-13 06:01:25.622528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
06:01:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0
00:06:24.198 b43f19af-fb20-4e36-b9dc-25a06e867c12
06:01:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64
00:06:24.456 7c148c6d-5282-4ddb-82e5-0785581a6884
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64
00:06:24.715 8a7cb52b-344c-4a8e-93ad-a29527e9b25c
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s
[2024-08-13 06:01:26.547557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7c148c6d-5282-4ddb-82e5-0785581a6884 is claimed
[2024-08-13 06:01:26.547697] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a7cb52b-344c-4a8e-93ad-a29527e9b25c is claimed
[2024-08-13 06:01:26.547843] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
[2024-08-13 06:01:26.547853] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-08-13 06:01:26.548163] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
[2024-08-13 06:01:26.548339] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
[2024-08-13 06:01:26.548363] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200
[2024-08-13 06:01:26.548552] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:25.232 06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 ))
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 ))
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
[2024-08-13 06:01:27.138666] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 ))
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100
[2024-08-13 06:01:27.346334] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-08-13 06:01:27.346377] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7c148c6d-5282-4ddb-82e5-0785581a6884' was resized: old size 131072, new size 204800
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100
00:06:26.024 [2024-08-13 06:01:27.549934] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-08-13 06:01:27.549970] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8a7cb52b-344c-4a8e-93ad-a29527e9b25c' was resized: old size 131072, new size 204800
[2024-08-13 06:01:27.550000] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks'
00:06:26.024 06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 ))
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks'
00:06:26.283 06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 ))
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks'
[2024-08-13 06:01:28.148924] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 ))
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0
[2024-08-13 06:01:28.324464] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-08-13 06:01:28.324633] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-08-13 06:01:28.324659] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-08-13 06:01:28.324870] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-08-13 06:01:28.325051] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-08-13 06:01:28.325112] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-08-13 06:01:28.325123] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0
[2024-08-13 06:01:28.528036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-08-13 06:01:28.528118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-08-13 06:01:28.528144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
[2024-08-13 06:01:28.528153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-08-13 06:01:28.530437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-08-13 06:01:28.530476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
[2024-08-13 06:01:28.532100] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7c148c6d-5282-4ddb-82e5-0785581a6884
[2024-08-13 06:01:28.532162] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7c148c6d-5282-4ddb-82e5-0785581a6884 is claimed
[2024-08-13 06:01:28.532269] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8a7cb52b-344c-4a8e-93ad-a29527e9b25c
[2024-08-13 06:01:28.532284] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a7cb52b-344c-4a8e-93ad-a29527e9b25c is claimed
[2024-08-13 06:01:28.532457] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8a7cb52b-344c-4a8e-93ad-a29527e9b25c (2) smaller than existing raid bdev Raid (3)
[2024-08-13 06:01:28.532490] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
[2024-08-13 06:01:28.532500] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
[2024-08-13 06:01:28.532759] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
[2024-08-13 06:01:28.532924] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
[2024-08-13 06:01:28.532935] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580
[2024-08-13 06:01:28.533079] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
pt0
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks'
[2024-08-13 06:01:28.740129] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:27.061 06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 ))
06:01:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 70481
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 70481 ']'
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # kill -0 70481
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@951 -- # uname
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70481
killing process with pid 70481
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70481'
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@965 -- # kill 70481
[2024-08-13 06:01:28.802409] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-08-13 06:01:28.802505] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
06:01:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # wait 70481
[2024-08-13 06:01:28.802561] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-08-13 06:01:28.802570] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline
00:06:27.320 [2024-08-13 06:01:28.962567] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:27.579 ************************************
END TEST raid1_resize_superblock_test
06:01:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0
00:06:27.579
00:06:27.579 real 0m5.015s
00:06:27.579 user 0m8.029s
00:06:27.579 sys 0m0.890s
06:01:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable
06:01:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
************************************
06:01:29 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s
06:01:29 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']'
06:01:29 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd
06:01:29 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true
06:01:29 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd
06:01:29 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0
06:01:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
06:01:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
06:01:29
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:27.579 ************************************ 00:06:27.579 START TEST raid_function_test_raid0 00:06:27.579 ************************************ 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1121 -- # raid_function_test raid0 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=70600 00:06:27.579 Process raid pid: 70600 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 70600' 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 70600 /var/tmp/spdk-raid.sock 00:06:27.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@827 -- # '[' -z 70600 ']' 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.579 06:01:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:27.579 [2024-08-13 06:01:29.367244] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
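The raid1_resize_superblock_test that finished above reduces to a short RPC sequence against the test socket. A condensed sketch, assembled from the commands visible in the log (the creation of lvol0 and the initial malloc0/pt0 passthru happen before this excerpt and are assumed; RPC is just shorthand for the rpc.py invocation used throughout):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# two 64 MiB lvols on lvstore lvs0 serve as the mirrored base bdevs
$RPC bdev_lvol_create -l lvs0 lvol0 64        # assumed; only the lvol1 call appears in this excerpt
$RPC bdev_lvol_create -l lvs0 lvol1 64

# raid1 with an on-disk superblock (-s), so the array can be re-assembled later
$RPC bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s

# grow both members from 64 MiB to 100 MiB, then confirm the raid grew with them
$RPC bdev_lvol_resize lvs0/lvol0 100
$RPC bdev_lvol_resize lvs0/lvol1 100
$RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # the log expects 196608 blocks here

# drop the passthru holding the lvstore and recreate it; the raid is
# rediscovered from the superblock written on the base bdevs
$RPC bdev_passthru_delete pt0
$RPC bdev_passthru_create -b malloc0 -p pt0
$RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # still 196608 after re-assembly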
00:06:27.579 [2024-08-13 06:01:29.367362] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.838 [2024-08-13 06:01:29.495883] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.839 [2024-08-13 06:01:29.543300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.839 [2024-08-13 06:01:29.586270] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.839 [2024-08-13 06:01:29.586307] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # return 0 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:06:28.774 [2024-08-13 06:01:30.451505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:28.774 [2024-08-13 06:01:30.453846] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:28.774 [2024-08-13 06:01:30.454020] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:28.774 [2024-08-13 06:01:30.454063] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:28.774 [2024-08-13 06:01:30.454428] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:28.774 [2024-08-13 06:01:30.454604] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:28.774 [2024-08-13 06:01:30.454623] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:28.774 [2024-08-13 06:01:30.454784] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.774 Base_1 00:06:28.774 Base_2 00:06:28.774 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:28.775 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:06:28.775 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:29.031 06:01:30 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:29.031 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.032 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:29.032 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:06:29.290 [2024-08-13 06:01:30.862829] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:29.290 /dev/nbd0 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@865 -- # local i 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # break 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.290 1+0 records in 00:06:29.290 1+0 records out 00:06:29.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274354 s, 14.9 MB/s 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # size=4096 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # return 0 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:29.290 06:01:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:29.290 06:01:30 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.549 { 00:06:29.549 "nbd_device": "/dev/nbd0", 00:06:29.549 "bdev_name": "raid" 00:06:29.549 } 00:06:29.549 ]' 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.549 { 00:06:29.549 "nbd_device": "/dev/nbd0", 00:06:29.549 "bdev_name": "raid" 00:06:29.549 } 00:06:29.549 ]' 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:29.549 4096+0 records in 00:06:29.549 4096+0 records out 00:06:29.549 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0275445 s, 
76.1 MB/s 00:06:29.549 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:29.808 4096+0 records in 00:06:29.808 4096+0 records out 00:06:29.808 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.194606 s, 10.8 MB/s 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:29.808 128+0 records in 00:06:29.808 128+0 records out 00:06:29.808 65536 bytes (66 kB, 64 KiB) copied, 0.00117627 s, 55.7 MB/s 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:29.808 2035+0 records in 00:06:29.808 2035+0 records out 00:06:29.808 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.014768 s, 70.6 MB/s 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:29.808 456+0 records in 00:06:29.808 456+0 records out 00:06:29.808 233472 bytes (233 kB, 228 KiB) copied, 0.00282323 s, 82.7 MB/s 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.808 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:06:30.067 [2024-08-13 06:01:31.736485] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:30.067 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:30.325 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.325 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.325 06:01:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # count=0 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 70600 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@946 -- # '[' -z 70600 ']' 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # kill -0 70600 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # uname 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70600 00:06:30.325 killing process with pid 70600 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70600' 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@965 -- # kill 70600 00:06:30.325 [2024-08-13 06:01:32.075258] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.325 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # wait 70600 00:06:30.325 [2024-08-13 06:01:32.075394] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.325 [2024-08-13 06:01:32.075453] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.325 [2024-08-13 06:01:32.075471] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:30.325 [2024-08-13 06:01:32.099250] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.584 ************************************ 00:06:30.584 END TEST raid_function_test_raid0 00:06:30.584 ************************************ 00:06:30.584 06:01:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:06:30.584 00:06:30.584 real 0m3.050s 00:06:30.584 user 0m4.073s 00:06:30.584 sys 0m0.917s 00:06:30.584 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.584 06:01:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:30.842 06:01:32 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:06:30.842 06:01:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:30.842 06:01:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.842 06:01:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.842 ************************************ 00:06:30.842 START TEST raid_function_test_concat 00:06:30.842 ************************************ 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1121 -- # raid_function_test concat 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 
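Both raid_function_test_raid0, which just completed, and the concat variant starting here follow the same pattern: a two-member raid built from Base_1 and Base_2 is exposed through NBD, a random pattern is written through it, and discarded ranges are checked to still match a zeroed reference. The raid bdev itself is created from an rpcs.txt file piped into rpc.py, whose contents the log does not show, so this condensed sketch covers only the NBD and verification steps that are visible above (loop variable names are illustrative):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# expose the configured 'raid' bdev as /dev/nbd0
$RPC nbd_start_disk raid /dev/nbd0

# write a 2 MiB random pattern through the raid and confirm it reads back intact
dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0

# discard three block ranges, zero the same ranges in the reference file,
# and verify the device still matches the reference after each pass
for off_cnt in "0 128" "1028 2035" "321 456"; do
  set -- $off_cnt
  dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=$1 count=$2 conv=notrunc
  blkdiscard -o $(( $1 * 512 )) -l $(( $2 * 512 )) /dev/nbd0
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
done

$RPC nbd_stop_disk /dev/nbd0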
00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=70724 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:30.842 Process raid pid: 70724 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 70724' 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 70724 /var/tmp/spdk-raid.sock 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@827 -- # '[' -z 70724 ']' 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:30.842 06:01:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.843 06:01:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:30.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:30.843 06:01:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.843 06:01:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:30.843 [2024-08-13 06:01:32.487183] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:06:30.843 [2024-08-13 06:01:32.487377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.843 [2024-08-13 06:01:32.632514] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.119 [2024-08-13 06:01:32.685440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.119 [2024-08-13 06:01:32.728330] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.119 [2024-08-13 06:01:32.728475] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.694 06:01:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.694 06:01:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # return 0 00:06:31.694 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:06:31.694 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:06:31.694 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:31.694 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:06:31.694 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:06:31.953 [2024-08-13 06:01:33.545978] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:31.953 [2024-08-13 06:01:33.548218] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:31.953 [2024-08-13 06:01:33.548397] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:31.953 [2024-08-13 06:01:33.548425] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:31.953 [2024-08-13 06:01:33.548774] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:31.953 [2024-08-13 06:01:33.548950] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:31.953 [2024-08-13 06:01:33.548967] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:31.953 [2024-08-13 06:01:33.549167] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:31.953 Base_1 00:06:31.953 Base_2 00:06:31.953 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:31.953 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:06:31.953 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:32.213 06:01:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:06:32.213 [2024-08-13 06:01:33.985214] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:32.213 /dev/nbd0 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@865 -- # local i 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # break 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.473 1+0 records in 00:06:32.473 1+0 records out 00:06:32.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040625 s, 10.1 MB/s 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # return 0 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:32.473 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.473 { 00:06:32.473 "nbd_device": "/dev/nbd0", 00:06:32.473 "bdev_name": "raid" 00:06:32.473 } 00:06:32.473 ]' 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.732 { 00:06:32.732 "nbd_device": "/dev/nbd0", 00:06:32.732 "bdev_name": "raid" 00:06:32.732 } 00:06:32.732 ]' 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:32.732 4096+0 records in 00:06:32.732 
4096+0 records out 00:06:32.732 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0347665 s, 60.3 MB/s 00:06:32.732 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:32.991 4096+0 records in 00:06:32.991 4096+0 records out 00:06:32.991 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.193583 s, 10.8 MB/s 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:32.991 128+0 records in 00:06:32.991 128+0 records out 00:06:32.991 65536 bytes (66 kB, 64 KiB) copied, 0.0010918 s, 60.0 MB/s 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:32.991 2035+0 records in 00:06:32.991 2035+0 records out 00:06:32.991 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142977 s, 72.9 MB/s 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:06:32.991 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:32.991 456+0 records in 00:06:32.991 456+0 records out 00:06:32.991 233472 bytes (233 kB, 228 KiB) copied, 0.00361985 s, 64.5 MB/s 00:06:32.992 
06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.992 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.250 [2024-08-13 06:01:34.886361] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:33.250 06:01:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.509 
06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 70724 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@946 -- # '[' -z 70724 ']' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # kill -0 70724 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # uname 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70724 00:06:33.509 killing process with pid 70724 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70724' 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@965 -- # kill 70724 00:06:33.509 [2024-08-13 06:01:35.189543] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:33.509 [2024-08-13 06:01:35.189683] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.509 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # wait 70724 00:06:33.509 [2024-08-13 06:01:35.189738] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.509 [2024-08-13 06:01:35.189753] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:33.509 [2024-08-13 06:01:35.213004] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:33.768 06:01:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:06:33.768 00:06:33.768 real 0m3.036s 00:06:33.768 user 0m4.074s 00:06:33.768 sys 0m0.907s 00:06:33.768 ************************************ 00:06:33.768 END TEST raid_function_test_concat 00:06:33.768 ************************************ 00:06:33.768 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.768 06:01:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:33.768 06:01:35 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:06:33.768 06:01:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:33.768 06:01:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.768 06:01:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:33.768 ************************************ 00:06:33.768 START TEST raid0_resize_test 00:06:33.768 ************************************ 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # 
raid_resize_test 0 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=70845 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 70845' 00:06:33.768 Process raid pid: 70845 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 70845 /var/tmp/spdk-raid.sock 00:06:33.768 06:01:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 70845 ']' 00:06:33.769 06:01:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:33.769 06:01:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.769 06:01:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:33.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:33.769 06:01:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.769 06:01:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.028 [2024-08-13 06:01:35.593195] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
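The raid0_resize_test process spawned here (pid 70845) drives the null-bdev resize flow shown in the output that follows. A condensed sketch of that sequence, using the sizes from the test's local variables (32 MiB members with 512-byte blocks, resized to 64 MiB) and the block counts the log later checks; RPC is again shorthand for the rpc.py invocation:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# two 32 MiB null bdevs with 512-byte blocks form a raid0 with a 64 KB strip (-z 64)
$RPC bdev_null_create Base_1 32 512
$RPC bdev_null_create Base_2 32 512
$RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

# growing only one member leaves the raid0 at its original 131072 blocks (64 MiB)
$RPC bdev_null_resize Base_1 64
$RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'

# growing the second member as well doubles the raid0 to 262144 blocks (128 MiB)
$RPC bdev_null_resize Base_2 64
$RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'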
00:06:34.028 [2024-08-13 06:01:35.593785] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.028 [2024-08-13 06:01:35.722955] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.028 [2024-08-13 06:01:35.774118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.028 [2024-08-13 06:01:35.816979] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.028 [2024-08-13 06:01:35.817119] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.965 06:01:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.965 06:01:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:06:34.965 06:01:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:34.965 Base_1 00:06:34.965 06:01:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:35.224 Base_2 00:06:35.224 06:01:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:06:35.224 06:01:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:35.484 [2024-08-13 06:01:37.024093] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:35.484 [2024-08-13 06:01:37.026098] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:35.484 [2024-08-13 06:01:37.026176] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:35.484 [2024-08-13 06:01:37.026195] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:35.484 [2024-08-13 06:01:37.026558] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:35.484 [2024-08-13 06:01:37.026687] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:35.484 [2024-08-13 06:01:37.026702] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:35.484 [2024-08-13 06:01:37.026841] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.484 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:35.484 [2024-08-13 06:01:37.231729] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.484 [2024-08-13 06:01:37.231855] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:35.484 true 00:06:35.484 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:35.484 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:06:35.743 [2024-08-13 06:01:37.443539] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.743 06:01:37 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:06:35.743 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:06:35.743 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:06:35.743 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:06:35.743 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:06:35.743 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:36.002 [2024-08-13 06:01:37.646970] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:36.002 [2024-08-13 06:01:37.647104] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:36.002 [2024-08-13 06:01:37.647165] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:36.002 true 00:06:36.002 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:36.002 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:06:36.261 [2024-08-13 06:01:37.854762] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 70845 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 70845 ']' 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 70845 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70845 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70845' 00:06:36.261 killing process with pid 70845 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 70845 00:06:36.261 [2024-08-13 06:01:37.907002] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:36.261 [2024-08-13 06:01:37.907191] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:36.261 06:01:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 70845 00:06:36.261 [2024-08-13 06:01:37.907277] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:06:36.261 [2024-08-13 06:01:37.907322] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:36.261 [2024-08-13 06:01:37.908884] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.557 06:01:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:06:36.557 00:06:36.557 real 0m2.635s 00:06:36.557 user 0m4.007s 00:06:36.557 sys 0m0.422s 00:06:36.557 06:01:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.557 ************************************ 00:06:36.557 END TEST raid0_resize_test 00:06:36.557 ************************************ 00:06:36.557 06:01:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.557 06:01:38 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:06:36.557 06:01:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:36.557 06:01:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.557 06:01:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.557 ************************************ 00:06:36.557 START TEST raid1_resize_test 00:06:36.557 ************************************ 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1121 -- # raid_resize_test 1 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=70911 00:06:36.557 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 70911' 00:06:36.558 Process raid pid: 70911 00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 70911 /var/tmp/spdk-raid.sock 00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@827 -- # '[' -z 70911 ']' 00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:36.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
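The raid0 resize sequence that just completed reduces to a handful of RPCs plus a block-count-to-MiB conversion; the numbers in this run (131072 and then 262144 blocks of 512 bytes, i.e. 64 MiB and 128 MiB) match it. A condensed, hand-run equivalent, assuming the same rpc.py and socket as above, might look like:

  $ RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $ $RPC bdev_null_create Base_1 32 512              # two 32 MiB null bdevs, 512-byte blocks
  $ $RPC bdev_null_create Base_2 32 512
  $ $RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  $ $RPC bdev_null_resize Base_1 64                  # raid0 only grows once every base bdev has grown
  $ $RPC bdev_null_resize Base_2 64
  $ blkcnt=$($RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks')
  $ echo $(( blkcnt * 512 / 1024 / 1024 )) MiB       # 262144 * 512 B = 128 MiB in this run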
00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.558 06:01:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.558 [2024-08-13 06:01:38.295911] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:06:36.558 [2024-08-13 06:01:38.296138] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.823 [2024-08-13 06:01:38.441561] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.823 [2024-08-13 06:01:38.491114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.823 [2024-08-13 06:01:38.533741] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.823 [2024-08-13 06:01:38.533860] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.390 06:01:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.390 06:01:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # return 0 00:06:37.390 06:01:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:37.649 Base_1 00:06:37.649 06:01:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:37.908 Base_2 00:06:37.908 06:01:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:06:37.908 06:01:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:06:38.166 [2024-08-13 06:01:39.712809] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:38.166 [2024-08-13 06:01:39.714603] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:38.166 [2024-08-13 06:01:39.714682] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:38.166 [2024-08-13 06:01:39.714691] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:38.166 [2024-08-13 06:01:39.715000] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:38.166 [2024-08-13 06:01:39.715125] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:38.166 [2024-08-13 06:01:39.715139] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:38.166 [2024-08-13 06:01:39.715295] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.166 06:01:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:38.166 [2024-08-13 06:01:39.912480] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.166 [2024-08-13 06:01:39.912514] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:38.166 true 00:06:38.166 06:01:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:38.166 06:01:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:06:38.425 [2024-08-13 06:01:40.112340] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.425 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:06:38.425 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:06:38.425 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:06:38.425 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:06:38.425 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:06:38.425 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:38.694 [2024-08-13 06:01:40.319798] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.694 [2024-08-13 06:01:40.319905] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:38.694 [2024-08-13 06:01:40.319939] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:38.694 true 00:06:38.694 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:38.694 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:06:38.954 [2024-08-13 06:01:40.523613] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 70911 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@946 -- # '[' -z 70911 ']' 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # kill -0 70911 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@951 -- # uname 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70911 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70911' 00:06:38.954 killing process with pid 70911 00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@965 -- # kill 70911 00:06:38.954 [2024-08-13 06:01:40.578651] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
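Note the contrast with the raid0 run above: at raid level 1 the same pair of 32 MiB bases produces a 65536-block (32 MiB) volume, which only grows to 131072 blocks (64 MiB) once both bases have been resized, whereas raid0 reported the summed capacity (131072 blocks, then 262144). To eyeball this on a live target, assuming the Raid bdev exists on the same socket:

  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[] | {num_blocks, block_size}'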
00:06:38.954 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # wait 70911 00:06:38.954 [2024-08-13 06:01:40.578854] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.954 [2024-08-13 06:01:40.579316] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.954 [2024-08-13 06:01:40.579396] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:38.954 [2024-08-13 06:01:40.580558] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.213 06:01:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:06:39.213 00:06:39.213 real 0m2.606s 00:06:39.213 user 0m3.916s 00:06:39.213 sys 0m0.448s 00:06:39.213 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.213 06:01:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.213 ************************************ 00:06:39.213 END TEST raid1_resize_test 00:06:39.213 ************************************ 00:06:39.213 06:01:40 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:06:39.213 06:01:40 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:06:39.213 06:01:40 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:39.213 06:01:40 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:39.213 06:01:40 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.213 06:01:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.213 ************************************ 00:06:39.213 START TEST raid_state_function_test 00:06:39.213 ************************************ 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:39.213 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- 
# local raid_bdev_name=Existed_Raid 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=70981 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 70981' 00:06:39.214 Process raid pid: 70981 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 70981 /var/tmp/spdk-raid.sock 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 70981 ']' 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:39.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.214 06:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.214 [2024-08-13 06:01:40.977179] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
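The raid_state_function_test starting here walks Existed_Raid through its states: creating the raid against base bdevs that do not exist yet leaves it in "configuring", adding malloc BaseBdev1 and BaseBdev2 brings it "online", and deleting a base bdev drops a raid0 volume (which has no redundancy) to "offline". The script's state checks amount to a query of the following shape, assuming the same rpc.py and socket; the jq filter mirrors the one used below:

  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  $ # prints configuring, then online, then offline as the test progresses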
00:06:39.214 [2024-08-13 06:01:40.977373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.473 [2024-08-13 06:01:41.122358] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.473 [2024-08-13 06:01:41.167521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.473 [2024-08-13 06:01:41.209886] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.473 [2024-08-13 06:01:41.209917] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.040 06:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.040 06:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:06:40.040 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:40.300 [2024-08-13 06:01:41.973820] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:40.300 [2024-08-13 06:01:41.973968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:40.300 [2024-08-13 06:01:41.973985] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:40.300 [2024-08-13 06:01:41.974009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:40.300 06:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.560 06:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:40.560 "name": "Existed_Raid", 00:06:40.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.560 "strip_size_kb": 64, 00:06:40.560 "state": "configuring", 00:06:40.560 "raid_level": "raid0", 00:06:40.560 "superblock": false, 00:06:40.560 "num_base_bdevs": 2, 
00:06:40.560 "num_base_bdevs_discovered": 0, 00:06:40.560 "num_base_bdevs_operational": 2, 00:06:40.560 "base_bdevs_list": [ 00:06:40.560 { 00:06:40.560 "name": "BaseBdev1", 00:06:40.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.560 "is_configured": false, 00:06:40.560 "data_offset": 0, 00:06:40.560 "data_size": 0 00:06:40.560 }, 00:06:40.560 { 00:06:40.560 "name": "BaseBdev2", 00:06:40.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.560 "is_configured": false, 00:06:40.560 "data_offset": 0, 00:06:40.560 "data_size": 0 00:06:40.560 } 00:06:40.560 ] 00:06:40.560 }' 00:06:40.560 06:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:40.560 06:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.128 06:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:41.128 [2024-08-13 06:01:42.908072] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:41.128 [2024-08-13 06:01:42.908209] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:41.386 06:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:41.386 [2024-08-13 06:01:43.079775] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:41.386 [2024-08-13 06:01:43.079898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:41.386 [2024-08-13 06:01:43.079941] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:41.386 [2024-08-13 06:01:43.079962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:41.386 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:41.646 [2024-08-13 06:01:43.260306] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:41.646 BaseBdev1 00:06:41.646 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:41.646 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:06:41.646 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:06:41.646 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:06:41.646 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:06:41.646 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:06:41.646 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:41.905 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:41.905 [ 00:06:41.905 { 00:06:41.905 "name": "BaseBdev1", 00:06:41.905 "aliases": [ 00:06:41.905 
"0adce3d4-b3f5-4661-b5a9-9f1dd823ac10" 00:06:41.905 ], 00:06:41.905 "product_name": "Malloc disk", 00:06:41.905 "block_size": 512, 00:06:41.905 "num_blocks": 65536, 00:06:41.905 "uuid": "0adce3d4-b3f5-4661-b5a9-9f1dd823ac10", 00:06:41.905 "assigned_rate_limits": { 00:06:41.905 "rw_ios_per_sec": 0, 00:06:41.905 "rw_mbytes_per_sec": 0, 00:06:41.905 "r_mbytes_per_sec": 0, 00:06:41.905 "w_mbytes_per_sec": 0 00:06:41.905 }, 00:06:41.905 "claimed": true, 00:06:41.905 "claim_type": "exclusive_write", 00:06:41.905 "zoned": false, 00:06:41.905 "supported_io_types": { 00:06:41.905 "read": true, 00:06:41.905 "write": true, 00:06:41.905 "unmap": true, 00:06:41.905 "flush": true, 00:06:41.905 "reset": true, 00:06:41.905 "nvme_admin": false, 00:06:41.905 "nvme_io": false, 00:06:41.905 "nvme_io_md": false, 00:06:41.905 "write_zeroes": true, 00:06:41.905 "zcopy": true, 00:06:41.905 "get_zone_info": false, 00:06:41.905 "zone_management": false, 00:06:41.905 "zone_append": false, 00:06:41.905 "compare": false, 00:06:41.905 "compare_and_write": false, 00:06:41.905 "abort": true, 00:06:41.905 "seek_hole": false, 00:06:41.905 "seek_data": false, 00:06:41.905 "copy": true, 00:06:41.905 "nvme_iov_md": false 00:06:41.905 }, 00:06:41.905 "memory_domains": [ 00:06:41.905 { 00:06:41.905 "dma_device_id": "system", 00:06:41.905 "dma_device_type": 1 00:06:41.905 }, 00:06:41.905 { 00:06:41.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.905 "dma_device_type": 2 00:06:41.905 } 00:06:41.905 ], 00:06:41.905 "driver_specific": {} 00:06:41.905 } 00:06:41.905 ] 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:42.164 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:42.164 "name": "Existed_Raid", 00:06:42.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.164 "strip_size_kb": 64, 00:06:42.164 "state": "configuring", 00:06:42.164 "raid_level": "raid0", 00:06:42.164 "superblock": false, 00:06:42.164 "num_base_bdevs": 2, 00:06:42.164 "num_base_bdevs_discovered": 
1, 00:06:42.165 "num_base_bdevs_operational": 2, 00:06:42.165 "base_bdevs_list": [ 00:06:42.165 { 00:06:42.165 "name": "BaseBdev1", 00:06:42.165 "uuid": "0adce3d4-b3f5-4661-b5a9-9f1dd823ac10", 00:06:42.165 "is_configured": true, 00:06:42.165 "data_offset": 0, 00:06:42.165 "data_size": 65536 00:06:42.165 }, 00:06:42.165 { 00:06:42.165 "name": "BaseBdev2", 00:06:42.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.165 "is_configured": false, 00:06:42.165 "data_offset": 0, 00:06:42.165 "data_size": 0 00:06:42.165 } 00:06:42.165 ] 00:06:42.165 }' 00:06:42.165 06:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:42.165 06:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.734 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:42.993 [2024-08-13 06:01:44.622083] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:42.993 [2024-08-13 06:01:44.622212] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:42.993 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:43.252 [2024-08-13 06:01:44.793829] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:43.252 [2024-08-13 06:01:44.795720] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:43.252 [2024-08-13 06:01:44.795812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:43.252 06:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:43.252 06:01:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:43.252 "name": "Existed_Raid", 00:06:43.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:43.252 "strip_size_kb": 64, 00:06:43.252 "state": "configuring", 00:06:43.252 "raid_level": "raid0", 00:06:43.252 "superblock": false, 00:06:43.252 "num_base_bdevs": 2, 00:06:43.252 "num_base_bdevs_discovered": 1, 00:06:43.252 "num_base_bdevs_operational": 2, 00:06:43.252 "base_bdevs_list": [ 00:06:43.252 { 00:06:43.252 "name": "BaseBdev1", 00:06:43.252 "uuid": "0adce3d4-b3f5-4661-b5a9-9f1dd823ac10", 00:06:43.252 "is_configured": true, 00:06:43.252 "data_offset": 0, 00:06:43.252 "data_size": 65536 00:06:43.252 }, 00:06:43.252 { 00:06:43.252 "name": "BaseBdev2", 00:06:43.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:43.252 "is_configured": false, 00:06:43.252 "data_offset": 0, 00:06:43.252 "data_size": 0 00:06:43.252 } 00:06:43.252 ] 00:06:43.252 }' 00:06:43.252 06:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:43.252 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.819 06:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:44.078 [2024-08-13 06:01:45.743548] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:44.078 [2024-08-13 06:01:45.743590] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:44.078 [2024-08-13 06:01:45.743606] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:44.078 [2024-08-13 06:01:45.743891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:44.078 [2024-08-13 06:01:45.744060] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:44.078 [2024-08-13 06:01:45.744072] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:44.078 [2024-08-13 06:01:45.744310] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.078 BaseBdev2 00:06:44.078 06:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:44.078 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:06:44.078 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:06:44.078 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:06:44.078 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:06:44.078 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:06:44.078 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:44.337 06:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:44.595 [ 00:06:44.595 { 00:06:44.595 "name": "BaseBdev2", 00:06:44.595 "aliases": [ 00:06:44.595 "40e8f5a1-b876-453d-b98d-2a558fb10637" 00:06:44.595 ], 00:06:44.595 "product_name": "Malloc disk", 
00:06:44.595 "block_size": 512, 00:06:44.595 "num_blocks": 65536, 00:06:44.595 "uuid": "40e8f5a1-b876-453d-b98d-2a558fb10637", 00:06:44.595 "assigned_rate_limits": { 00:06:44.595 "rw_ios_per_sec": 0, 00:06:44.595 "rw_mbytes_per_sec": 0, 00:06:44.595 "r_mbytes_per_sec": 0, 00:06:44.595 "w_mbytes_per_sec": 0 00:06:44.595 }, 00:06:44.595 "claimed": true, 00:06:44.595 "claim_type": "exclusive_write", 00:06:44.595 "zoned": false, 00:06:44.595 "supported_io_types": { 00:06:44.595 "read": true, 00:06:44.595 "write": true, 00:06:44.595 "unmap": true, 00:06:44.595 "flush": true, 00:06:44.595 "reset": true, 00:06:44.595 "nvme_admin": false, 00:06:44.595 "nvme_io": false, 00:06:44.595 "nvme_io_md": false, 00:06:44.595 "write_zeroes": true, 00:06:44.595 "zcopy": true, 00:06:44.595 "get_zone_info": false, 00:06:44.595 "zone_management": false, 00:06:44.595 "zone_append": false, 00:06:44.595 "compare": false, 00:06:44.595 "compare_and_write": false, 00:06:44.595 "abort": true, 00:06:44.595 "seek_hole": false, 00:06:44.595 "seek_data": false, 00:06:44.595 "copy": true, 00:06:44.595 "nvme_iov_md": false 00:06:44.595 }, 00:06:44.595 "memory_domains": [ 00:06:44.595 { 00:06:44.595 "dma_device_id": "system", 00:06:44.595 "dma_device_type": 1 00:06:44.595 }, 00:06:44.595 { 00:06:44.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.595 "dma_device_type": 2 00:06:44.595 } 00:06:44.595 ], 00:06:44.595 "driver_specific": {} 00:06:44.595 } 00:06:44.595 ] 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:44.595 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.596 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:44.596 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:44.596 "name": "Existed_Raid", 00:06:44.596 "uuid": "84fc13db-53ae-4518-b002-b73cf3bcc58a", 00:06:44.596 "strip_size_kb": 64, 00:06:44.596 "state": "online", 00:06:44.596 "raid_level": "raid0", 00:06:44.596 
"superblock": false, 00:06:44.596 "num_base_bdevs": 2, 00:06:44.596 "num_base_bdevs_discovered": 2, 00:06:44.596 "num_base_bdevs_operational": 2, 00:06:44.596 "base_bdevs_list": [ 00:06:44.596 { 00:06:44.596 "name": "BaseBdev1", 00:06:44.596 "uuid": "0adce3d4-b3f5-4661-b5a9-9f1dd823ac10", 00:06:44.596 "is_configured": true, 00:06:44.596 "data_offset": 0, 00:06:44.596 "data_size": 65536 00:06:44.596 }, 00:06:44.596 { 00:06:44.596 "name": "BaseBdev2", 00:06:44.596 "uuid": "40e8f5a1-b876-453d-b98d-2a558fb10637", 00:06:44.596 "is_configured": true, 00:06:44.596 "data_offset": 0, 00:06:44.596 "data_size": 65536 00:06:44.596 } 00:06:44.596 ] 00:06:44.596 }' 00:06:44.596 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:44.596 06:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:06:45.164 06:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:45.423 [2024-08-13 06:01:47.065621] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.423 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:45.423 "name": "Existed_Raid", 00:06:45.423 "aliases": [ 00:06:45.423 "84fc13db-53ae-4518-b002-b73cf3bcc58a" 00:06:45.423 ], 00:06:45.423 "product_name": "Raid Volume", 00:06:45.423 "block_size": 512, 00:06:45.423 "num_blocks": 131072, 00:06:45.423 "uuid": "84fc13db-53ae-4518-b002-b73cf3bcc58a", 00:06:45.423 "assigned_rate_limits": { 00:06:45.423 "rw_ios_per_sec": 0, 00:06:45.423 "rw_mbytes_per_sec": 0, 00:06:45.423 "r_mbytes_per_sec": 0, 00:06:45.423 "w_mbytes_per_sec": 0 00:06:45.423 }, 00:06:45.423 "claimed": false, 00:06:45.423 "zoned": false, 00:06:45.423 "supported_io_types": { 00:06:45.423 "read": true, 00:06:45.423 "write": true, 00:06:45.423 "unmap": true, 00:06:45.423 "flush": true, 00:06:45.423 "reset": true, 00:06:45.423 "nvme_admin": false, 00:06:45.423 "nvme_io": false, 00:06:45.423 "nvme_io_md": false, 00:06:45.423 "write_zeroes": true, 00:06:45.423 "zcopy": false, 00:06:45.423 "get_zone_info": false, 00:06:45.423 "zone_management": false, 00:06:45.423 "zone_append": false, 00:06:45.423 "compare": false, 00:06:45.423 "compare_and_write": false, 00:06:45.423 "abort": false, 00:06:45.423 "seek_hole": false, 00:06:45.423 "seek_data": false, 00:06:45.423 "copy": false, 00:06:45.423 "nvme_iov_md": false 00:06:45.423 }, 00:06:45.423 "memory_domains": [ 00:06:45.423 { 00:06:45.423 "dma_device_id": "system", 00:06:45.423 "dma_device_type": 1 00:06:45.423 }, 00:06:45.423 { 00:06:45.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.423 "dma_device_type": 2 
00:06:45.423 }, 00:06:45.423 { 00:06:45.423 "dma_device_id": "system", 00:06:45.423 "dma_device_type": 1 00:06:45.423 }, 00:06:45.423 { 00:06:45.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.423 "dma_device_type": 2 00:06:45.423 } 00:06:45.423 ], 00:06:45.423 "driver_specific": { 00:06:45.423 "raid": { 00:06:45.423 "uuid": "84fc13db-53ae-4518-b002-b73cf3bcc58a", 00:06:45.423 "strip_size_kb": 64, 00:06:45.423 "state": "online", 00:06:45.423 "raid_level": "raid0", 00:06:45.423 "superblock": false, 00:06:45.423 "num_base_bdevs": 2, 00:06:45.423 "num_base_bdevs_discovered": 2, 00:06:45.423 "num_base_bdevs_operational": 2, 00:06:45.423 "base_bdevs_list": [ 00:06:45.423 { 00:06:45.423 "name": "BaseBdev1", 00:06:45.423 "uuid": "0adce3d4-b3f5-4661-b5a9-9f1dd823ac10", 00:06:45.423 "is_configured": true, 00:06:45.423 "data_offset": 0, 00:06:45.423 "data_size": 65536 00:06:45.424 }, 00:06:45.424 { 00:06:45.424 "name": "BaseBdev2", 00:06:45.424 "uuid": "40e8f5a1-b876-453d-b98d-2a558fb10637", 00:06:45.424 "is_configured": true, 00:06:45.424 "data_offset": 0, 00:06:45.424 "data_size": 65536 00:06:45.424 } 00:06:45.424 ] 00:06:45.424 } 00:06:45.424 } 00:06:45.424 }' 00:06:45.424 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:45.424 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:06:45.424 BaseBdev2' 00:06:45.424 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:45.424 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:06:45.424 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:45.683 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:45.683 "name": "BaseBdev1", 00:06:45.683 "aliases": [ 00:06:45.683 "0adce3d4-b3f5-4661-b5a9-9f1dd823ac10" 00:06:45.683 ], 00:06:45.683 "product_name": "Malloc disk", 00:06:45.683 "block_size": 512, 00:06:45.683 "num_blocks": 65536, 00:06:45.683 "uuid": "0adce3d4-b3f5-4661-b5a9-9f1dd823ac10", 00:06:45.683 "assigned_rate_limits": { 00:06:45.683 "rw_ios_per_sec": 0, 00:06:45.683 "rw_mbytes_per_sec": 0, 00:06:45.683 "r_mbytes_per_sec": 0, 00:06:45.683 "w_mbytes_per_sec": 0 00:06:45.683 }, 00:06:45.683 "claimed": true, 00:06:45.683 "claim_type": "exclusive_write", 00:06:45.683 "zoned": false, 00:06:45.683 "supported_io_types": { 00:06:45.683 "read": true, 00:06:45.683 "write": true, 00:06:45.683 "unmap": true, 00:06:45.683 "flush": true, 00:06:45.683 "reset": true, 00:06:45.683 "nvme_admin": false, 00:06:45.683 "nvme_io": false, 00:06:45.683 "nvme_io_md": false, 00:06:45.683 "write_zeroes": true, 00:06:45.683 "zcopy": true, 00:06:45.683 "get_zone_info": false, 00:06:45.683 "zone_management": false, 00:06:45.683 "zone_append": false, 00:06:45.683 "compare": false, 00:06:45.683 "compare_and_write": false, 00:06:45.683 "abort": true, 00:06:45.683 "seek_hole": false, 00:06:45.683 "seek_data": false, 00:06:45.683 "copy": true, 00:06:45.683 "nvme_iov_md": false 00:06:45.683 }, 00:06:45.683 "memory_domains": [ 00:06:45.683 { 00:06:45.683 "dma_device_id": "system", 00:06:45.683 "dma_device_type": 1 00:06:45.683 }, 00:06:45.683 { 00:06:45.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.683 "dma_device_type": 2 00:06:45.683 } 
00:06:45.683 ], 00:06:45.683 "driver_specific": {} 00:06:45.683 }' 00:06:45.683 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:45.683 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:45.683 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:45.683 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:45.941 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:45.941 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:45.941 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:45.941 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:45.941 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:45.941 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:45.941 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:46.201 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:46.201 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:46.201 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:06:46.201 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:46.201 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:46.201 "name": "BaseBdev2", 00:06:46.201 "aliases": [ 00:06:46.201 "40e8f5a1-b876-453d-b98d-2a558fb10637" 00:06:46.201 ], 00:06:46.201 "product_name": "Malloc disk", 00:06:46.201 "block_size": 512, 00:06:46.201 "num_blocks": 65536, 00:06:46.201 "uuid": "40e8f5a1-b876-453d-b98d-2a558fb10637", 00:06:46.201 "assigned_rate_limits": { 00:06:46.201 "rw_ios_per_sec": 0, 00:06:46.201 "rw_mbytes_per_sec": 0, 00:06:46.201 "r_mbytes_per_sec": 0, 00:06:46.201 "w_mbytes_per_sec": 0 00:06:46.201 }, 00:06:46.201 "claimed": true, 00:06:46.201 "claim_type": "exclusive_write", 00:06:46.201 "zoned": false, 00:06:46.201 "supported_io_types": { 00:06:46.201 "read": true, 00:06:46.201 "write": true, 00:06:46.201 "unmap": true, 00:06:46.201 "flush": true, 00:06:46.201 "reset": true, 00:06:46.201 "nvme_admin": false, 00:06:46.201 "nvme_io": false, 00:06:46.201 "nvme_io_md": false, 00:06:46.201 "write_zeroes": true, 00:06:46.201 "zcopy": true, 00:06:46.201 "get_zone_info": false, 00:06:46.201 "zone_management": false, 00:06:46.201 "zone_append": false, 00:06:46.201 "compare": false, 00:06:46.201 "compare_and_write": false, 00:06:46.201 "abort": true, 00:06:46.201 "seek_hole": false, 00:06:46.201 "seek_data": false, 00:06:46.201 "copy": true, 00:06:46.201 "nvme_iov_md": false 00:06:46.201 }, 00:06:46.201 "memory_domains": [ 00:06:46.201 { 00:06:46.201 "dma_device_id": "system", 00:06:46.201 "dma_device_type": 1 00:06:46.201 }, 00:06:46.201 { 00:06:46.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.201 "dma_device_type": 2 00:06:46.201 } 00:06:46.201 ], 00:06:46.201 "driver_specific": {} 00:06:46.201 }' 00:06:46.201 06:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:46.459 06:01:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:46.459 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:46.459 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:46.459 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:46.459 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:46.459 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:46.459 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:46.717 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:46.717 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:46.717 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:46.717 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:46.717 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:46.976 [2024-08-13 06:01:48.551076] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:46.976 [2024-08-13 06:01:48.551111] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.976 [2024-08-13 06:01:48.551164] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.976 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:06:46.976 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:06:46.976 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:06:46.976 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:46.977 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:46.977 06:01:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.235 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:47.235 "name": "Existed_Raid", 00:06:47.235 "uuid": "84fc13db-53ae-4518-b002-b73cf3bcc58a", 00:06:47.235 "strip_size_kb": 64, 00:06:47.235 "state": "offline", 00:06:47.235 "raid_level": "raid0", 00:06:47.235 "superblock": false, 00:06:47.235 "num_base_bdevs": 2, 00:06:47.236 "num_base_bdevs_discovered": 1, 00:06:47.236 "num_base_bdevs_operational": 1, 00:06:47.236 "base_bdevs_list": [ 00:06:47.236 { 00:06:47.236 "name": null, 00:06:47.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.236 "is_configured": false, 00:06:47.236 "data_offset": 0, 00:06:47.236 "data_size": 65536 00:06:47.236 }, 00:06:47.236 { 00:06:47.236 "name": "BaseBdev2", 00:06:47.236 "uuid": "40e8f5a1-b876-453d-b98d-2a558fb10637", 00:06:47.236 "is_configured": true, 00:06:47.236 "data_offset": 0, 00:06:47.236 "data_size": 65536 00:06:47.236 } 00:06:47.236 ] 00:06:47.236 }' 00:06:47.236 06:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:47.236 06:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.803 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:06:47.803 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:47.803 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:47.803 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:06:48.092 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:06:48.092 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:48.092 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:48.092 [2024-08-13 06:01:49.815211] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:48.092 [2024-08-13 06:01:49.815342] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:48.092 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:06:48.092 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:48.092 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:48.092 06:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 70981 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 70981 ']' 00:06:48.352 06:01:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 70981 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70981 00:06:48.352 killing process with pid 70981 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70981' 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 70981 00:06:48.352 [2024-08-13 06:01:50.085475] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.352 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 70981 00:06:48.352 [2024-08-13 06:01:50.086483] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:06:48.611 00:06:48.611 real 0m9.436s 00:06:48.611 user 0m16.916s 00:06:48.611 sys 0m1.422s 00:06:48.611 ************************************ 00:06:48.611 END TEST raid_state_function_test 00:06:48.611 ************************************ 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 06:01:50 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:48.611 06:01:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:48.611 06:01:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.611 06:01:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 ************************************ 00:06:48.611 START TEST raid_state_function_test_sb 00:06:48.611 ************************************ 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:48.611 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:06:48.870 Process raid pid: 71321 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=71321 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 71321' 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 71321 /var/tmp/spdk-raid.sock 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 71321 ']' 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:48.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.870 06:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.870 [2024-08-13 06:01:50.486464] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
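For reference, the superblock variant of the state-function test that starts here drives the same raid0 lifecycle as the previous run, but passes -s to bdev_raid_create. A minimal sketch of the RPC sequence it exercises, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and reusing the bdev names, sizes and flags captured in this log (the jq filter is illustrative):

# Sketch only: superblock-enabled raid0 lifecycle as exercised by raid_state_function_test_sb.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Two 32 MiB malloc base bdevs with 512-byte blocks.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2

# raid0 with a 64 KiB strip size; -s enables the on-disk superblock.
$RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# State is "configuring" until both base bdevs are claimed, then "online".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# raid0 has no redundancy, so removing a base bdev takes the array offline.
$RPC bdev_malloc_delete BaseBdev1

With the superblock enabled, the dumps later in this run report data_offset 2048 and data_size 63488 blocks for each base bdev, i.e. the first 2048 of the 65536 blocks are reserved for raid metadata (compare data_offset 0 / data_size 65536 in the non-superblock run above).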
00:06:48.870 [2024-08-13 06:01:50.486735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.870 [2024-08-13 06:01:50.632979] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.129 [2024-08-13 06:01:50.679659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.129 [2024-08-13 06:01:50.722454] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.129 [2024-08-13 06:01:50.722577] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.697 06:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.697 06:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:06:49.697 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:49.955 [2024-08-13 06:01:51.502588] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:49.955 [2024-08-13 06:01:51.502723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:49.955 [2024-08-13 06:01:51.502756] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:49.955 [2024-08-13 06:01:51.502777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:49.955 "name": "Existed_Raid", 00:06:49.955 "uuid": "7da3628d-f4b4-4639-8622-a97bdca24ea3", 00:06:49.955 "strip_size_kb": 64, 00:06:49.955 "state": "configuring", 00:06:49.955 "raid_level": "raid0", 00:06:49.955 
"superblock": true, 00:06:49.955 "num_base_bdevs": 2, 00:06:49.955 "num_base_bdevs_discovered": 0, 00:06:49.955 "num_base_bdevs_operational": 2, 00:06:49.955 "base_bdevs_list": [ 00:06:49.955 { 00:06:49.955 "name": "BaseBdev1", 00:06:49.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.955 "is_configured": false, 00:06:49.955 "data_offset": 0, 00:06:49.955 "data_size": 0 00:06:49.955 }, 00:06:49.955 { 00:06:49.955 "name": "BaseBdev2", 00:06:49.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.955 "is_configured": false, 00:06:49.955 "data_offset": 0, 00:06:49.955 "data_size": 0 00:06:49.955 } 00:06:49.955 ] 00:06:49.955 }' 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:49.955 06:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.521 06:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:50.779 [2024-08-13 06:01:52.488740] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.779 [2024-08-13 06:01:52.488782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:50.779 06:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:51.038 [2024-08-13 06:01:52.688444] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:51.038 [2024-08-13 06:01:52.688497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:51.038 [2024-08-13 06:01:52.688517] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.038 [2024-08-13 06:01:52.688525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.038 06:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:51.297 [2024-08-13 06:01:52.900989] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.297 BaseBdev1 00:06:51.297 06:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:51.297 06:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:06:51.297 06:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:06:51.297 06:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:06:51.297 06:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:06:51.297 06:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:06:51.297 06:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:51.555 [ 00:06:51.555 { 
00:06:51.555 "name": "BaseBdev1", 00:06:51.555 "aliases": [ 00:06:51.555 "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a" 00:06:51.555 ], 00:06:51.555 "product_name": "Malloc disk", 00:06:51.555 "block_size": 512, 00:06:51.555 "num_blocks": 65536, 00:06:51.555 "uuid": "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a", 00:06:51.555 "assigned_rate_limits": { 00:06:51.555 "rw_ios_per_sec": 0, 00:06:51.555 "rw_mbytes_per_sec": 0, 00:06:51.555 "r_mbytes_per_sec": 0, 00:06:51.555 "w_mbytes_per_sec": 0 00:06:51.555 }, 00:06:51.555 "claimed": true, 00:06:51.555 "claim_type": "exclusive_write", 00:06:51.555 "zoned": false, 00:06:51.555 "supported_io_types": { 00:06:51.555 "read": true, 00:06:51.555 "write": true, 00:06:51.555 "unmap": true, 00:06:51.555 "flush": true, 00:06:51.555 "reset": true, 00:06:51.555 "nvme_admin": false, 00:06:51.555 "nvme_io": false, 00:06:51.555 "nvme_io_md": false, 00:06:51.555 "write_zeroes": true, 00:06:51.555 "zcopy": true, 00:06:51.555 "get_zone_info": false, 00:06:51.555 "zone_management": false, 00:06:51.555 "zone_append": false, 00:06:51.555 "compare": false, 00:06:51.555 "compare_and_write": false, 00:06:51.555 "abort": true, 00:06:51.555 "seek_hole": false, 00:06:51.555 "seek_data": false, 00:06:51.555 "copy": true, 00:06:51.555 "nvme_iov_md": false 00:06:51.555 }, 00:06:51.555 "memory_domains": [ 00:06:51.555 { 00:06:51.555 "dma_device_id": "system", 00:06:51.555 "dma_device_type": 1 00:06:51.555 }, 00:06:51.555 { 00:06:51.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.555 "dma_device_type": 2 00:06:51.555 } 00:06:51.555 ], 00:06:51.555 "driver_specific": {} 00:06:51.555 } 00:06:51.555 ] 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:51.555 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.812 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:51.812 "name": "Existed_Raid", 00:06:51.812 "uuid": "c0ec4231-1ad2-4801-b111-97a29f736a1e", 00:06:51.812 "strip_size_kb": 64, 00:06:51.812 "state": "configuring", 00:06:51.812 "raid_level": 
"raid0", 00:06:51.812 "superblock": true, 00:06:51.812 "num_base_bdevs": 2, 00:06:51.812 "num_base_bdevs_discovered": 1, 00:06:51.812 "num_base_bdevs_operational": 2, 00:06:51.812 "base_bdevs_list": [ 00:06:51.812 { 00:06:51.812 "name": "BaseBdev1", 00:06:51.812 "uuid": "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a", 00:06:51.812 "is_configured": true, 00:06:51.812 "data_offset": 2048, 00:06:51.812 "data_size": 63488 00:06:51.812 }, 00:06:51.812 { 00:06:51.812 "name": "BaseBdev2", 00:06:51.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.812 "is_configured": false, 00:06:51.812 "data_offset": 0, 00:06:51.812 "data_size": 0 00:06:51.812 } 00:06:51.812 ] 00:06:51.812 }' 00:06:51.812 06:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:51.812 06:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.378 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:52.637 [2024-08-13 06:01:54.210963] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:52.637 [2024-08-13 06:01:54.211129] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:52.637 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:52.637 [2024-08-13 06:01:54.406695] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.637 [2024-08-13 06:01:54.408564] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.637 [2024-08-13 06:01:54.408664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.637 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:52.637 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:52.637 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:52.637 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:52.637 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:52.896 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:52.896 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:52.896 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:52.896 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:52.896 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:52.896 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:52.896 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:52.897 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:06:52.897 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.897 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:52.897 "name": "Existed_Raid", 00:06:52.897 "uuid": "8b33823c-66e2-4bd5-820d-6bae889d1ef0", 00:06:52.897 "strip_size_kb": 64, 00:06:52.897 "state": "configuring", 00:06:52.897 "raid_level": "raid0", 00:06:52.897 "superblock": true, 00:06:52.897 "num_base_bdevs": 2, 00:06:52.897 "num_base_bdevs_discovered": 1, 00:06:52.897 "num_base_bdevs_operational": 2, 00:06:52.897 "base_bdevs_list": [ 00:06:52.897 { 00:06:52.897 "name": "BaseBdev1", 00:06:52.897 "uuid": "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a", 00:06:52.897 "is_configured": true, 00:06:52.897 "data_offset": 2048, 00:06:52.897 "data_size": 63488 00:06:52.897 }, 00:06:52.897 { 00:06:52.897 "name": "BaseBdev2", 00:06:52.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.897 "is_configured": false, 00:06:52.897 "data_offset": 0, 00:06:52.897 "data_size": 0 00:06:52.897 } 00:06:52.897 ] 00:06:52.897 }' 00:06:52.897 06:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:52.897 06:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.464 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:53.722 [2024-08-13 06:01:55.382614] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:53.722 [2024-08-13 06:01:55.382861] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:53.722 [2024-08-13 06:01:55.382884] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:53.722 [2024-08-13 06:01:55.383293] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:53.722 [2024-08-13 06:01:55.383465] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:53.722 [2024-08-13 06:01:55.383486] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:53.722 [2024-08-13 06:01:55.383641] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.722 BaseBdev2 00:06:53.722 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:53.722 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:06:53.722 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:06:53.722 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:06:53.722 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:06:53.722 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:06:53.722 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:54.003 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
-t 2000 00:06:54.274 [ 00:06:54.274 { 00:06:54.274 "name": "BaseBdev2", 00:06:54.274 "aliases": [ 00:06:54.274 "48d670d4-b835-4212-a3d8-fc3ef1223100" 00:06:54.274 ], 00:06:54.274 "product_name": "Malloc disk", 00:06:54.274 "block_size": 512, 00:06:54.274 "num_blocks": 65536, 00:06:54.274 "uuid": "48d670d4-b835-4212-a3d8-fc3ef1223100", 00:06:54.274 "assigned_rate_limits": { 00:06:54.274 "rw_ios_per_sec": 0, 00:06:54.274 "rw_mbytes_per_sec": 0, 00:06:54.274 "r_mbytes_per_sec": 0, 00:06:54.274 "w_mbytes_per_sec": 0 00:06:54.274 }, 00:06:54.274 "claimed": true, 00:06:54.274 "claim_type": "exclusive_write", 00:06:54.274 "zoned": false, 00:06:54.274 "supported_io_types": { 00:06:54.274 "read": true, 00:06:54.274 "write": true, 00:06:54.274 "unmap": true, 00:06:54.274 "flush": true, 00:06:54.274 "reset": true, 00:06:54.274 "nvme_admin": false, 00:06:54.274 "nvme_io": false, 00:06:54.274 "nvme_io_md": false, 00:06:54.274 "write_zeroes": true, 00:06:54.274 "zcopy": true, 00:06:54.274 "get_zone_info": false, 00:06:54.274 "zone_management": false, 00:06:54.274 "zone_append": false, 00:06:54.274 "compare": false, 00:06:54.274 "compare_and_write": false, 00:06:54.274 "abort": true, 00:06:54.274 "seek_hole": false, 00:06:54.274 "seek_data": false, 00:06:54.274 "copy": true, 00:06:54.275 "nvme_iov_md": false 00:06:54.275 }, 00:06:54.275 "memory_domains": [ 00:06:54.275 { 00:06:54.275 "dma_device_id": "system", 00:06:54.275 "dma_device_type": 1 00:06:54.275 }, 00:06:54.275 { 00:06:54.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.275 "dma_device_type": 2 00:06:54.275 } 00:06:54.275 ], 00:06:54.275 "driver_specific": {} 00:06:54.275 } 00:06:54.275 ] 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:54.275 06:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.275 06:01:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:54.275 "name": "Existed_Raid", 00:06:54.275 "uuid": "8b33823c-66e2-4bd5-820d-6bae889d1ef0", 00:06:54.275 "strip_size_kb": 64, 00:06:54.275 "state": "online", 00:06:54.275 "raid_level": "raid0", 00:06:54.275 "superblock": true, 00:06:54.275 "num_base_bdevs": 2, 00:06:54.275 "num_base_bdevs_discovered": 2, 00:06:54.275 "num_base_bdevs_operational": 2, 00:06:54.275 "base_bdevs_list": [ 00:06:54.275 { 00:06:54.275 "name": "BaseBdev1", 00:06:54.275 "uuid": "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a", 00:06:54.275 "is_configured": true, 00:06:54.275 "data_offset": 2048, 00:06:54.275 "data_size": 63488 00:06:54.275 }, 00:06:54.275 { 00:06:54.275 "name": "BaseBdev2", 00:06:54.275 "uuid": "48d670d4-b835-4212-a3d8-fc3ef1223100", 00:06:54.275 "is_configured": true, 00:06:54.275 "data_offset": 2048, 00:06:54.275 "data_size": 63488 00:06:54.275 } 00:06:54.275 ] 00:06:54.275 }' 00:06:54.275 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:54.275 06:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:06:54.843 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:55.102 [2024-08-13 06:01:56.716749] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.102 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:55.102 "name": "Existed_Raid", 00:06:55.102 "aliases": [ 00:06:55.102 "8b33823c-66e2-4bd5-820d-6bae889d1ef0" 00:06:55.102 ], 00:06:55.102 "product_name": "Raid Volume", 00:06:55.102 "block_size": 512, 00:06:55.102 "num_blocks": 126976, 00:06:55.102 "uuid": "8b33823c-66e2-4bd5-820d-6bae889d1ef0", 00:06:55.102 "assigned_rate_limits": { 00:06:55.102 "rw_ios_per_sec": 0, 00:06:55.102 "rw_mbytes_per_sec": 0, 00:06:55.102 "r_mbytes_per_sec": 0, 00:06:55.102 "w_mbytes_per_sec": 0 00:06:55.102 }, 00:06:55.102 "claimed": false, 00:06:55.102 "zoned": false, 00:06:55.102 "supported_io_types": { 00:06:55.102 "read": true, 00:06:55.102 "write": true, 00:06:55.102 "unmap": true, 00:06:55.102 "flush": true, 00:06:55.102 "reset": true, 00:06:55.102 "nvme_admin": false, 00:06:55.102 "nvme_io": false, 00:06:55.102 "nvme_io_md": false, 00:06:55.102 "write_zeroes": true, 00:06:55.102 "zcopy": false, 00:06:55.102 "get_zone_info": false, 00:06:55.102 "zone_management": false, 00:06:55.102 "zone_append": false, 00:06:55.102 "compare": false, 00:06:55.102 "compare_and_write": false, 00:06:55.102 "abort": false, 00:06:55.102 "seek_hole": false, 00:06:55.102 "seek_data": false, 00:06:55.102 "copy": false, 
00:06:55.102 "nvme_iov_md": false 00:06:55.102 }, 00:06:55.102 "memory_domains": [ 00:06:55.102 { 00:06:55.102 "dma_device_id": "system", 00:06:55.102 "dma_device_type": 1 00:06:55.102 }, 00:06:55.102 { 00:06:55.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.102 "dma_device_type": 2 00:06:55.102 }, 00:06:55.102 { 00:06:55.102 "dma_device_id": "system", 00:06:55.102 "dma_device_type": 1 00:06:55.102 }, 00:06:55.102 { 00:06:55.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.102 "dma_device_type": 2 00:06:55.102 } 00:06:55.102 ], 00:06:55.102 "driver_specific": { 00:06:55.102 "raid": { 00:06:55.102 "uuid": "8b33823c-66e2-4bd5-820d-6bae889d1ef0", 00:06:55.102 "strip_size_kb": 64, 00:06:55.102 "state": "online", 00:06:55.102 "raid_level": "raid0", 00:06:55.102 "superblock": true, 00:06:55.102 "num_base_bdevs": 2, 00:06:55.102 "num_base_bdevs_discovered": 2, 00:06:55.102 "num_base_bdevs_operational": 2, 00:06:55.102 "base_bdevs_list": [ 00:06:55.102 { 00:06:55.102 "name": "BaseBdev1", 00:06:55.102 "uuid": "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a", 00:06:55.102 "is_configured": true, 00:06:55.102 "data_offset": 2048, 00:06:55.102 "data_size": 63488 00:06:55.102 }, 00:06:55.102 { 00:06:55.102 "name": "BaseBdev2", 00:06:55.102 "uuid": "48d670d4-b835-4212-a3d8-fc3ef1223100", 00:06:55.102 "is_configured": true, 00:06:55.102 "data_offset": 2048, 00:06:55.102 "data_size": 63488 00:06:55.102 } 00:06:55.102 ] 00:06:55.102 } 00:06:55.102 } 00:06:55.102 }' 00:06:55.102 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:55.102 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:06:55.102 BaseBdev2' 00:06:55.102 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:55.102 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:06:55.102 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:55.360 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:55.360 "name": "BaseBdev1", 00:06:55.360 "aliases": [ 00:06:55.360 "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a" 00:06:55.360 ], 00:06:55.360 "product_name": "Malloc disk", 00:06:55.360 "block_size": 512, 00:06:55.360 "num_blocks": 65536, 00:06:55.360 "uuid": "b90ed1c8-faed-4f13-8bc8-14ef6eac7b4a", 00:06:55.360 "assigned_rate_limits": { 00:06:55.360 "rw_ios_per_sec": 0, 00:06:55.360 "rw_mbytes_per_sec": 0, 00:06:55.360 "r_mbytes_per_sec": 0, 00:06:55.360 "w_mbytes_per_sec": 0 00:06:55.360 }, 00:06:55.360 "claimed": true, 00:06:55.360 "claim_type": "exclusive_write", 00:06:55.360 "zoned": false, 00:06:55.360 "supported_io_types": { 00:06:55.360 "read": true, 00:06:55.360 "write": true, 00:06:55.360 "unmap": true, 00:06:55.360 "flush": true, 00:06:55.360 "reset": true, 00:06:55.360 "nvme_admin": false, 00:06:55.360 "nvme_io": false, 00:06:55.360 "nvme_io_md": false, 00:06:55.360 "write_zeroes": true, 00:06:55.360 "zcopy": true, 00:06:55.360 "get_zone_info": false, 00:06:55.360 "zone_management": false, 00:06:55.360 "zone_append": false, 00:06:55.360 "compare": false, 00:06:55.360 "compare_and_write": false, 00:06:55.360 "abort": true, 00:06:55.360 "seek_hole": false, 00:06:55.360 "seek_data": false, 00:06:55.360 "copy": true, 
00:06:55.360 "nvme_iov_md": false 00:06:55.360 }, 00:06:55.360 "memory_domains": [ 00:06:55.360 { 00:06:55.360 "dma_device_id": "system", 00:06:55.360 "dma_device_type": 1 00:06:55.360 }, 00:06:55.360 { 00:06:55.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.360 "dma_device_type": 2 00:06:55.360 } 00:06:55.360 ], 00:06:55.360 "driver_specific": {} 00:06:55.360 }' 00:06:55.361 06:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.361 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.361 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:55.361 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:55.361 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:06:55.619 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:55.877 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:55.877 "name": "BaseBdev2", 00:06:55.877 "aliases": [ 00:06:55.877 "48d670d4-b835-4212-a3d8-fc3ef1223100" 00:06:55.877 ], 00:06:55.877 "product_name": "Malloc disk", 00:06:55.877 "block_size": 512, 00:06:55.877 "num_blocks": 65536, 00:06:55.877 "uuid": "48d670d4-b835-4212-a3d8-fc3ef1223100", 00:06:55.877 "assigned_rate_limits": { 00:06:55.877 "rw_ios_per_sec": 0, 00:06:55.877 "rw_mbytes_per_sec": 0, 00:06:55.877 "r_mbytes_per_sec": 0, 00:06:55.877 "w_mbytes_per_sec": 0 00:06:55.877 }, 00:06:55.877 "claimed": true, 00:06:55.877 "claim_type": "exclusive_write", 00:06:55.877 "zoned": false, 00:06:55.877 "supported_io_types": { 00:06:55.877 "read": true, 00:06:55.877 "write": true, 00:06:55.877 "unmap": true, 00:06:55.877 "flush": true, 00:06:55.877 "reset": true, 00:06:55.877 "nvme_admin": false, 00:06:55.877 "nvme_io": false, 00:06:55.877 "nvme_io_md": false, 00:06:55.877 "write_zeroes": true, 00:06:55.877 "zcopy": true, 00:06:55.877 "get_zone_info": false, 00:06:55.877 "zone_management": false, 00:06:55.877 "zone_append": false, 00:06:55.877 "compare": false, 00:06:55.877 "compare_and_write": false, 00:06:55.877 "abort": true, 00:06:55.877 "seek_hole": false, 00:06:55.877 "seek_data": false, 00:06:55.877 "copy": true, 00:06:55.877 "nvme_iov_md": false 00:06:55.877 }, 00:06:55.877 "memory_domains": [ 00:06:55.877 { 00:06:55.877 "dma_device_id": "system", 00:06:55.878 
"dma_device_type": 1 00:06:55.878 }, 00:06:55.878 { 00:06:55.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.878 "dma_device_type": 2 00:06:55.878 } 00:06:55.878 ], 00:06:55.878 "driver_specific": {} 00:06:55.878 }' 00:06:55.878 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.878 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.878 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:55.878 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:56.136 06:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:56.395 [2024-08-13 06:01:58.090289] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:56.395 [2024-08-13 06:01:58.090376] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.395 [2024-08-13 06:01:58.090462] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:56.395 06:01:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:56.395 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.654 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:56.654 "name": "Existed_Raid", 00:06:56.654 "uuid": "8b33823c-66e2-4bd5-820d-6bae889d1ef0", 00:06:56.654 "strip_size_kb": 64, 00:06:56.654 "state": "offline", 00:06:56.654 "raid_level": "raid0", 00:06:56.654 "superblock": true, 00:06:56.654 "num_base_bdevs": 2, 00:06:56.654 "num_base_bdevs_discovered": 1, 00:06:56.654 "num_base_bdevs_operational": 1, 00:06:56.654 "base_bdevs_list": [ 00:06:56.654 { 00:06:56.654 "name": null, 00:06:56.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.654 "is_configured": false, 00:06:56.654 "data_offset": 2048, 00:06:56.654 "data_size": 63488 00:06:56.654 }, 00:06:56.654 { 00:06:56.654 "name": "BaseBdev2", 00:06:56.654 "uuid": "48d670d4-b835-4212-a3d8-fc3ef1223100", 00:06:56.654 "is_configured": true, 00:06:56.654 "data_offset": 2048, 00:06:56.654 "data_size": 63488 00:06:56.654 } 00:06:56.654 ] 00:06:56.654 }' 00:06:56.654 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:56.654 06:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.221 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:06:57.221 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:57.221 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:06:57.221 06:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.480 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:06:57.480 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:57.480 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:57.480 [2024-08-13 06:01:59.231714] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:57.480 [2024-08-13 06:01:59.231854] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:57.480 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:06:57.480 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:57.480 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.480 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.739 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:06:57.739 
06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:06:57.739 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:06:57.739 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 71321 00:06:57.739 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 71321 ']' 00:06:57.739 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 71321 00:06:57.739 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71321 00:06:57.999 killing process with pid 71321 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71321' 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 71321 00:06:57.999 [2024-08-13 06:01:59.556082] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 71321 00:06:57.999 [2024-08-13 06:01:59.557105] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:06:57.999 00:06:57.999 real 0m9.397s 00:06:57.999 user 0m16.813s 00:06:57.999 sys 0m1.485s 00:06:57.999 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.258 06:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.258 ************************************ 00:06:58.258 END TEST raid_state_function_test_sb 00:06:58.258 ************************************ 00:06:58.258 06:01:59 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:58.258 06:01:59 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:58.258 06:01:59 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.258 06:01:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.258 ************************************ 00:06:58.258 START TEST raid_superblock_test 00:06:58.258 ************************************ 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local 
base_bdevs_pt 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=71660 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 71660 /var/tmp/spdk-raid.sock 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 71660 ']' 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.258 06:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.259 [2024-08-13 06:01:59.940576] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
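The raid_superblock_test that begins here builds its base bdevs as passthru devices over malloc disks so that each one carries a fixed, known UUID, then creates a superblock raid0 volume named raid_bdev1 on top. A minimal sketch of that setup, assuming the same bdev_svc / rpc.py arrangement as above, with names and UUIDs taken from the commands captured below:

# Sketch only: passthru-backed base bdevs with fixed UUIDs, as used by raid_superblock_test.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Wrap each malloc disk in a passthru bdev that pins its UUID.
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 512 -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Superblock raid0 over the passthru bdevs, 64 KiB strips.
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

# Confirm the volume is online with both base bdevs discovered.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The fixed UUIDs give each base bdev a stable identity, presumably so the superblock written via -s can re-identify the members in the later stages of this test.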
00:06:58.259 [2024-08-13 06:01:59.941515] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71660 ] 00:06:58.517 [2024-08-13 06:02:00.091527] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.517 [2024-08-13 06:02:00.140227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.517 [2024-08-13 06:02:00.182772] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.517 [2024-08-13 06:02:00.182810] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:59.086 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:06:59.345 malloc1 00:06:59.345 06:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:59.604 [2024-08-13 06:02:01.187116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:59.604 [2024-08-13 06:02:01.187270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.604 [2024-08-13 06:02:01.187315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:59.604 [2024-08-13 06:02:01.187345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.604 [2024-08-13 06:02:01.189533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.604 [2024-08-13 06:02:01.189620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:59.604 pt1 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:59.604 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:06:59.862 malloc2 00:06:59.862 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:59.862 [2024-08-13 06:02:01.607338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:59.862 [2024-08-13 06:02:01.607491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.862 [2024-08-13 06:02:01.607530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:59.862 [2024-08-13 06:02:01.607557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.862 [2024-08-13 06:02:01.609711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.862 [2024-08-13 06:02:01.609790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:59.862 pt2 00:06:59.862 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:06:59.862 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:06:59.862 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:00.121 [2024-08-13 06:02:01.811066] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:00.121 [2024-08-13 06:02:01.812985] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:00.121 [2024-08-13 06:02:01.813237] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:00.121 [2024-08-13 06:02:01.813286] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.121 [2024-08-13 06:02:01.813592] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:00.121 [2024-08-13 06:02:01.813747] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:00.121 [2024-08-13 06:02:01.813760] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:00.121 [2024-08-13 06:02:01.813915] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:00.121 06:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.381 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:00.381 "name": "raid_bdev1", 00:07:00.381 "uuid": "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef", 00:07:00.381 "strip_size_kb": 64, 00:07:00.381 "state": "online", 00:07:00.381 "raid_level": "raid0", 00:07:00.381 "superblock": true, 00:07:00.381 "num_base_bdevs": 2, 00:07:00.381 "num_base_bdevs_discovered": 2, 00:07:00.381 "num_base_bdevs_operational": 2, 00:07:00.381 "base_bdevs_list": [ 00:07:00.381 { 00:07:00.381 "name": "pt1", 00:07:00.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.381 "is_configured": true, 00:07:00.381 "data_offset": 2048, 00:07:00.381 "data_size": 63488 00:07:00.381 }, 00:07:00.381 { 00:07:00.381 "name": "pt2", 00:07:00.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.381 "is_configured": true, 00:07:00.381 "data_offset": 2048, 00:07:00.381 "data_size": 63488 00:07:00.381 } 00:07:00.381 ] 00:07:00.381 }' 00:07:00.381 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:00.381 06:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:00.948 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:01.208 [2024-08-13 06:02:02.789626] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.208 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:01.208 "name": "raid_bdev1", 00:07:01.208 "aliases": [ 00:07:01.208 "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef" 00:07:01.208 ], 00:07:01.208 "product_name": "Raid Volume", 00:07:01.208 "block_size": 512, 00:07:01.208 "num_blocks": 126976, 00:07:01.208 "uuid": "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef", 00:07:01.208 "assigned_rate_limits": { 00:07:01.208 
"rw_ios_per_sec": 0, 00:07:01.208 "rw_mbytes_per_sec": 0, 00:07:01.208 "r_mbytes_per_sec": 0, 00:07:01.208 "w_mbytes_per_sec": 0 00:07:01.208 }, 00:07:01.208 "claimed": false, 00:07:01.208 "zoned": false, 00:07:01.208 "supported_io_types": { 00:07:01.208 "read": true, 00:07:01.208 "write": true, 00:07:01.208 "unmap": true, 00:07:01.208 "flush": true, 00:07:01.208 "reset": true, 00:07:01.208 "nvme_admin": false, 00:07:01.208 "nvme_io": false, 00:07:01.208 "nvme_io_md": false, 00:07:01.208 "write_zeroes": true, 00:07:01.208 "zcopy": false, 00:07:01.208 "get_zone_info": false, 00:07:01.208 "zone_management": false, 00:07:01.208 "zone_append": false, 00:07:01.208 "compare": false, 00:07:01.208 "compare_and_write": false, 00:07:01.208 "abort": false, 00:07:01.208 "seek_hole": false, 00:07:01.208 "seek_data": false, 00:07:01.208 "copy": false, 00:07:01.208 "nvme_iov_md": false 00:07:01.208 }, 00:07:01.208 "memory_domains": [ 00:07:01.208 { 00:07:01.208 "dma_device_id": "system", 00:07:01.208 "dma_device_type": 1 00:07:01.208 }, 00:07:01.208 { 00:07:01.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.208 "dma_device_type": 2 00:07:01.208 }, 00:07:01.208 { 00:07:01.208 "dma_device_id": "system", 00:07:01.208 "dma_device_type": 1 00:07:01.209 }, 00:07:01.209 { 00:07:01.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.209 "dma_device_type": 2 00:07:01.209 } 00:07:01.209 ], 00:07:01.209 "driver_specific": { 00:07:01.209 "raid": { 00:07:01.209 "uuid": "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef", 00:07:01.209 "strip_size_kb": 64, 00:07:01.209 "state": "online", 00:07:01.209 "raid_level": "raid0", 00:07:01.209 "superblock": true, 00:07:01.209 "num_base_bdevs": 2, 00:07:01.209 "num_base_bdevs_discovered": 2, 00:07:01.209 "num_base_bdevs_operational": 2, 00:07:01.209 "base_bdevs_list": [ 00:07:01.209 { 00:07:01.209 "name": "pt1", 00:07:01.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.209 "is_configured": true, 00:07:01.209 "data_offset": 2048, 00:07:01.209 "data_size": 63488 00:07:01.209 }, 00:07:01.209 { 00:07:01.209 "name": "pt2", 00:07:01.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.209 "is_configured": true, 00:07:01.209 "data_offset": 2048, 00:07:01.209 "data_size": 63488 00:07:01.209 } 00:07:01.209 ] 00:07:01.209 } 00:07:01.209 } 00:07:01.209 }' 00:07:01.209 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.209 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:01.209 pt2' 00:07:01.209 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:01.209 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:01.209 06:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:01.468 "name": "pt1", 00:07:01.468 "aliases": [ 00:07:01.468 "00000000-0000-0000-0000-000000000001" 00:07:01.468 ], 00:07:01.468 "product_name": "passthru", 00:07:01.468 "block_size": 512, 00:07:01.468 "num_blocks": 65536, 00:07:01.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.468 "assigned_rate_limits": { 00:07:01.468 "rw_ios_per_sec": 0, 00:07:01.468 "rw_mbytes_per_sec": 0, 00:07:01.468 "r_mbytes_per_sec": 0, 00:07:01.468 "w_mbytes_per_sec": 
0 00:07:01.468 }, 00:07:01.468 "claimed": true, 00:07:01.468 "claim_type": "exclusive_write", 00:07:01.468 "zoned": false, 00:07:01.468 "supported_io_types": { 00:07:01.468 "read": true, 00:07:01.468 "write": true, 00:07:01.468 "unmap": true, 00:07:01.468 "flush": true, 00:07:01.468 "reset": true, 00:07:01.468 "nvme_admin": false, 00:07:01.468 "nvme_io": false, 00:07:01.468 "nvme_io_md": false, 00:07:01.468 "write_zeroes": true, 00:07:01.468 "zcopy": true, 00:07:01.468 "get_zone_info": false, 00:07:01.468 "zone_management": false, 00:07:01.468 "zone_append": false, 00:07:01.468 "compare": false, 00:07:01.468 "compare_and_write": false, 00:07:01.468 "abort": true, 00:07:01.468 "seek_hole": false, 00:07:01.468 "seek_data": false, 00:07:01.468 "copy": true, 00:07:01.468 "nvme_iov_md": false 00:07:01.468 }, 00:07:01.468 "memory_domains": [ 00:07:01.468 { 00:07:01.468 "dma_device_id": "system", 00:07:01.468 "dma_device_type": 1 00:07:01.468 }, 00:07:01.468 { 00:07:01.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.468 "dma_device_type": 2 00:07:01.468 } 00:07:01.468 ], 00:07:01.468 "driver_specific": { 00:07:01.468 "passthru": { 00:07:01.468 "name": "pt1", 00:07:01.468 "base_bdev_name": "malloc1" 00:07:01.468 } 00:07:01.468 } 00:07:01.468 }' 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:01.468 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:01.728 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:01.987 "name": "pt2", 00:07:01.987 "aliases": [ 00:07:01.987 "00000000-0000-0000-0000-000000000002" 00:07:01.987 ], 00:07:01.987 "product_name": "passthru", 00:07:01.987 "block_size": 512, 00:07:01.987 "num_blocks": 65536, 00:07:01.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.987 "assigned_rate_limits": { 00:07:01.987 "rw_ios_per_sec": 0, 00:07:01.987 "rw_mbytes_per_sec": 0, 00:07:01.987 "r_mbytes_per_sec": 0, 00:07:01.987 "w_mbytes_per_sec": 0 00:07:01.987 }, 00:07:01.987 "claimed": true, 00:07:01.987 "claim_type": "exclusive_write", 00:07:01.987 "zoned": false, 00:07:01.987 
"supported_io_types": { 00:07:01.987 "read": true, 00:07:01.987 "write": true, 00:07:01.987 "unmap": true, 00:07:01.987 "flush": true, 00:07:01.987 "reset": true, 00:07:01.987 "nvme_admin": false, 00:07:01.987 "nvme_io": false, 00:07:01.987 "nvme_io_md": false, 00:07:01.987 "write_zeroes": true, 00:07:01.987 "zcopy": true, 00:07:01.987 "get_zone_info": false, 00:07:01.987 "zone_management": false, 00:07:01.987 "zone_append": false, 00:07:01.987 "compare": false, 00:07:01.987 "compare_and_write": false, 00:07:01.987 "abort": true, 00:07:01.987 "seek_hole": false, 00:07:01.987 "seek_data": false, 00:07:01.987 "copy": true, 00:07:01.987 "nvme_iov_md": false 00:07:01.987 }, 00:07:01.987 "memory_domains": [ 00:07:01.987 { 00:07:01.987 "dma_device_id": "system", 00:07:01.987 "dma_device_type": 1 00:07:01.987 }, 00:07:01.987 { 00:07:01.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.987 "dma_device_type": 2 00:07:01.987 } 00:07:01.987 ], 00:07:01.987 "driver_specific": { 00:07:01.987 "passthru": { 00:07:01.987 "name": "pt2", 00:07:01.987 "base_bdev_name": "malloc2" 00:07:01.987 } 00:07:01.987 } 00:07:01.987 }' 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:01.987 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:02.245 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:02.245 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:02.245 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:02.245 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:02.245 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:02.245 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:02.245 06:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:07:02.504 [2024-08-13 06:02:04.103289] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.504 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef 00:07:02.504 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef ']' 00:07:02.504 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:02.764 [2024-08-13 06:02:04.298698] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:02.764 [2024-08-13 06:02:04.298793] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:02.764 [2024-08-13 06:02:04.298884] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.764 
[2024-08-13 06:02:04.298941] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.764 [2024-08-13 06:02:04.298955] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:02.764 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:07:02.764 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:02.764 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:07:02.764 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:07:02.764 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:02.764 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:03.022 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:03.023 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:03.281 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:03.281 06:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:03.540 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:03.798 [2024-08-13 06:02:05.344923] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:03.798 [2024-08-13 06:02:05.346948] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:03.798 [2024-08-13 06:02:05.347070] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:03.798 [2024-08-13 06:02:05.347161] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:03.798 [2024-08-13 06:02:05.347218] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:03.798 [2024-08-13 06:02:05.347243] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:03.798 request: 00:07:03.798 { 00:07:03.798 "name": "raid_bdev1", 00:07:03.798 "raid_level": "raid0", 00:07:03.798 "base_bdevs": [ 00:07:03.798 "malloc1", 00:07:03.798 "malloc2" 00:07:03.798 ], 00:07:03.798 "strip_size_kb": 64, 00:07:03.798 "superblock": false, 00:07:03.798 "method": "bdev_raid_create", 00:07:03.798 "req_id": 1 00:07:03.798 } 00:07:03.798 Got JSON-RPC error response 00:07:03.798 response: 00:07:03.798 { 00:07:03.798 "code": -17, 00:07:03.798 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:03.798 } 00:07:03.798 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:07:03.798 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:03.798 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:03.798 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:03.798 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:03.798 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:07:03.798 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:07:03.799 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:07:03.799 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:04.057 [2024-08-13 06:02:05.740203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:04.057 [2024-08-13 06:02:05.740284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.057 [2024-08-13 06:02:05.740302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:04.057 [2024-08-13 06:02:05.740313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.057 [2024-08-13 06:02:05.742562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.057 [2024-08-13 06:02:05.742608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:04.057 [2024-08-13 06:02:05.742696] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:04.057 [2024-08-13 06:02:05.742735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:07:04.057 pt1 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:04.057 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.315 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:04.315 "name": "raid_bdev1", 00:07:04.315 "uuid": "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef", 00:07:04.315 "strip_size_kb": 64, 00:07:04.315 "state": "configuring", 00:07:04.315 "raid_level": "raid0", 00:07:04.315 "superblock": true, 00:07:04.315 "num_base_bdevs": 2, 00:07:04.315 "num_base_bdevs_discovered": 1, 00:07:04.315 "num_base_bdevs_operational": 2, 00:07:04.315 "base_bdevs_list": [ 00:07:04.315 { 00:07:04.315 "name": "pt1", 00:07:04.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.315 "is_configured": true, 00:07:04.315 "data_offset": 2048, 00:07:04.315 "data_size": 63488 00:07:04.315 }, 00:07:04.315 { 00:07:04.315 "name": null, 00:07:04.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.315 "is_configured": false, 00:07:04.315 "data_offset": 2048, 00:07:04.315 "data_size": 63488 00:07:04.315 } 00:07:04.315 ] 00:07:04.315 }' 00:07:04.315 06:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:04.315 06:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.880 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:07:04.880 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:07:04.880 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:04.880 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:04.880 [2024-08-13 06:02:06.670607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:04.880 [2024-08-13 06:02:06.670743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.880 [2024-08-13 06:02:06.670809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:04.880 [2024-08-13 
06:02:06.670846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.880 [2024-08-13 06:02:06.671346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.880 [2024-08-13 06:02:06.671423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:05.138 [2024-08-13 06:02:06.671547] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:05.138 [2024-08-13 06:02:06.671606] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:05.138 [2024-08-13 06:02:06.671745] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:05.138 [2024-08-13 06:02:06.671789] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.138 [2024-08-13 06:02:06.672088] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:05.138 [2024-08-13 06:02:06.672252] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:05.138 [2024-08-13 06:02:06.672293] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:05.138 [2024-08-13 06:02:06.672466] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.138 pt2 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:05.138 "name": "raid_bdev1", 00:07:05.138 "uuid": "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef", 00:07:05.138 "strip_size_kb": 64, 00:07:05.138 "state": "online", 00:07:05.138 "raid_level": "raid0", 00:07:05.138 "superblock": true, 00:07:05.138 "num_base_bdevs": 2, 00:07:05.138 "num_base_bdevs_discovered": 2, 00:07:05.138 "num_base_bdevs_operational": 2, 00:07:05.138 "base_bdevs_list": [ 00:07:05.138 { 00:07:05.138 "name": "pt1", 00:07:05.138 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:05.138 "is_configured": true, 00:07:05.138 "data_offset": 2048, 00:07:05.138 "data_size": 63488 00:07:05.138 }, 00:07:05.138 { 00:07:05.138 "name": "pt2", 00:07:05.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:05.138 "is_configured": true, 00:07:05.138 "data_offset": 2048, 00:07:05.138 "data_size": 63488 00:07:05.138 } 00:07:05.138 ] 00:07:05.138 }' 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:05.138 06:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.706 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:07:05.706 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:05.706 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:05.706 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:05.706 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:05.706 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:05.966 [2024-08-13 06:02:07.681160] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:05.966 "name": "raid_bdev1", 00:07:05.966 "aliases": [ 00:07:05.966 "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef" 00:07:05.966 ], 00:07:05.966 "product_name": "Raid Volume", 00:07:05.966 "block_size": 512, 00:07:05.966 "num_blocks": 126976, 00:07:05.966 "uuid": "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef", 00:07:05.966 "assigned_rate_limits": { 00:07:05.966 "rw_ios_per_sec": 0, 00:07:05.966 "rw_mbytes_per_sec": 0, 00:07:05.966 "r_mbytes_per_sec": 0, 00:07:05.966 "w_mbytes_per_sec": 0 00:07:05.966 }, 00:07:05.966 "claimed": false, 00:07:05.966 "zoned": false, 00:07:05.966 "supported_io_types": { 00:07:05.966 "read": true, 00:07:05.966 "write": true, 00:07:05.966 "unmap": true, 00:07:05.966 "flush": true, 00:07:05.966 "reset": true, 00:07:05.966 "nvme_admin": false, 00:07:05.966 "nvme_io": false, 00:07:05.966 "nvme_io_md": false, 00:07:05.966 "write_zeroes": true, 00:07:05.966 "zcopy": false, 00:07:05.966 "get_zone_info": false, 00:07:05.966 "zone_management": false, 00:07:05.966 "zone_append": false, 00:07:05.966 "compare": false, 00:07:05.966 "compare_and_write": false, 00:07:05.966 "abort": false, 00:07:05.966 "seek_hole": false, 00:07:05.966 "seek_data": false, 00:07:05.966 "copy": false, 00:07:05.966 "nvme_iov_md": false 00:07:05.966 }, 00:07:05.966 "memory_domains": [ 00:07:05.966 { 00:07:05.966 "dma_device_id": "system", 00:07:05.966 "dma_device_type": 1 00:07:05.966 }, 00:07:05.966 { 00:07:05.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.966 "dma_device_type": 2 00:07:05.966 }, 00:07:05.966 { 00:07:05.966 "dma_device_id": "system", 00:07:05.966 "dma_device_type": 1 00:07:05.966 }, 00:07:05.966 { 00:07:05.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.966 "dma_device_type": 2 00:07:05.966 } 00:07:05.966 ], 00:07:05.966 "driver_specific": { 00:07:05.966 "raid": { 
00:07:05.966 "uuid": "8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef", 00:07:05.966 "strip_size_kb": 64, 00:07:05.966 "state": "online", 00:07:05.966 "raid_level": "raid0", 00:07:05.966 "superblock": true, 00:07:05.966 "num_base_bdevs": 2, 00:07:05.966 "num_base_bdevs_discovered": 2, 00:07:05.966 "num_base_bdevs_operational": 2, 00:07:05.966 "base_bdevs_list": [ 00:07:05.966 { 00:07:05.966 "name": "pt1", 00:07:05.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:05.966 "is_configured": true, 00:07:05.966 "data_offset": 2048, 00:07:05.966 "data_size": 63488 00:07:05.966 }, 00:07:05.966 { 00:07:05.966 "name": "pt2", 00:07:05.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:05.966 "is_configured": true, 00:07:05.966 "data_offset": 2048, 00:07:05.966 "data_size": 63488 00:07:05.966 } 00:07:05.966 ] 00:07:05.966 } 00:07:05.966 } 00:07:05.966 }' 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:05.966 pt2' 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:05.966 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:06.227 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:06.227 "name": "pt1", 00:07:06.227 "aliases": [ 00:07:06.227 "00000000-0000-0000-0000-000000000001" 00:07:06.227 ], 00:07:06.227 "product_name": "passthru", 00:07:06.227 "block_size": 512, 00:07:06.227 "num_blocks": 65536, 00:07:06.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:06.227 "assigned_rate_limits": { 00:07:06.227 "rw_ios_per_sec": 0, 00:07:06.227 "rw_mbytes_per_sec": 0, 00:07:06.227 "r_mbytes_per_sec": 0, 00:07:06.227 "w_mbytes_per_sec": 0 00:07:06.227 }, 00:07:06.227 "claimed": true, 00:07:06.227 "claim_type": "exclusive_write", 00:07:06.227 "zoned": false, 00:07:06.227 "supported_io_types": { 00:07:06.227 "read": true, 00:07:06.227 "write": true, 00:07:06.227 "unmap": true, 00:07:06.227 "flush": true, 00:07:06.227 "reset": true, 00:07:06.227 "nvme_admin": false, 00:07:06.227 "nvme_io": false, 00:07:06.227 "nvme_io_md": false, 00:07:06.227 "write_zeroes": true, 00:07:06.227 "zcopy": true, 00:07:06.227 "get_zone_info": false, 00:07:06.227 "zone_management": false, 00:07:06.227 "zone_append": false, 00:07:06.227 "compare": false, 00:07:06.227 "compare_and_write": false, 00:07:06.227 "abort": true, 00:07:06.227 "seek_hole": false, 00:07:06.227 "seek_data": false, 00:07:06.227 "copy": true, 00:07:06.227 "nvme_iov_md": false 00:07:06.227 }, 00:07:06.227 "memory_domains": [ 00:07:06.227 { 00:07:06.227 "dma_device_id": "system", 00:07:06.227 "dma_device_type": 1 00:07:06.227 }, 00:07:06.227 { 00:07:06.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.227 "dma_device_type": 2 00:07:06.227 } 00:07:06.227 ], 00:07:06.227 "driver_specific": { 00:07:06.227 "passthru": { 00:07:06.227 "name": "pt1", 00:07:06.227 "base_bdev_name": "malloc1" 00:07:06.227 } 00:07:06.227 } 00:07:06.227 }' 00:07:06.227 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:06.227 06:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:06.484 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:06.742 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:06.742 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:06.742 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:06.742 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:06.743 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:06.743 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:06.743 "name": "pt2", 00:07:06.743 "aliases": [ 00:07:06.743 "00000000-0000-0000-0000-000000000002" 00:07:06.743 ], 00:07:06.743 "product_name": "passthru", 00:07:06.743 "block_size": 512, 00:07:06.743 "num_blocks": 65536, 00:07:06.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:06.743 "assigned_rate_limits": { 00:07:06.743 "rw_ios_per_sec": 0, 00:07:06.743 "rw_mbytes_per_sec": 0, 00:07:06.743 "r_mbytes_per_sec": 0, 00:07:06.743 "w_mbytes_per_sec": 0 00:07:06.743 }, 00:07:06.743 "claimed": true, 00:07:06.743 "claim_type": "exclusive_write", 00:07:06.743 "zoned": false, 00:07:06.743 "supported_io_types": { 00:07:06.743 "read": true, 00:07:06.743 "write": true, 00:07:06.743 "unmap": true, 00:07:06.743 "flush": true, 00:07:06.743 "reset": true, 00:07:06.743 "nvme_admin": false, 00:07:06.743 "nvme_io": false, 00:07:06.743 "nvme_io_md": false, 00:07:06.743 "write_zeroes": true, 00:07:06.743 "zcopy": true, 00:07:06.743 "get_zone_info": false, 00:07:06.743 "zone_management": false, 00:07:06.743 "zone_append": false, 00:07:06.743 "compare": false, 00:07:06.743 "compare_and_write": false, 00:07:06.743 "abort": true, 00:07:06.743 "seek_hole": false, 00:07:06.743 "seek_data": false, 00:07:06.743 "copy": true, 00:07:06.743 "nvme_iov_md": false 00:07:06.743 }, 00:07:06.743 "memory_domains": [ 00:07:06.743 { 00:07:06.743 "dma_device_id": "system", 00:07:06.743 "dma_device_type": 1 00:07:06.743 }, 00:07:06.743 { 00:07:06.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.743 "dma_device_type": 2 00:07:06.743 } 00:07:06.743 ], 00:07:06.743 "driver_specific": { 00:07:06.743 "passthru": { 00:07:06.743 "name": "pt2", 00:07:06.743 "base_bdev_name": "malloc2" 00:07:06.743 } 00:07:06.743 } 00:07:06.743 }' 00:07:06.743 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:07.002 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:07.002 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:07.002 06:02:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:07.002 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:07.002 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:07.002 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:07.002 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:07.262 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:07.262 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:07.262 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:07.262 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:07.262 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:07.262 06:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:07:07.521 [2024-08-13 06:02:09.098758] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef '!=' 8d8ae11f-73cb-4d1e-a55f-7c126c3ffbef ']' 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 71660 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 71660 ']' 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 71660 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71660 00:07:07.521 killing process with pid 71660 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71660' 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 71660 00:07:07.521 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 71660 00:07:07.521 [2024-08-13 06:02:09.174656] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.521 [2024-08-13 06:02:09.174766] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.521 [2024-08-13 06:02:09.174836] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.521 [2024-08-13 06:02:09.174853] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:07.521 [2024-08-13 
06:02:09.197388] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.781 06:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:07:07.781 ************************************ 00:07:07.781 00:07:07.781 real 0m9.574s 00:07:07.781 user 0m17.192s 00:07:07.781 sys 0m1.494s 00:07:07.781 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.781 06:02:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.781 END TEST raid_superblock_test 00:07:07.781 ************************************ 00:07:07.781 06:02:09 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:07.781 06:02:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:07.781 06:02:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.781 06:02:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.781 ************************************ 00:07:07.781 START TEST raid_read_error_test 00:07:07.781 ************************************ 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 read 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:07:07.781 06:02:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.EWEBUIGxpW 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=71999 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 71999 /var/tmp/spdk-raid.sock 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 71999 ']' 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:07.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.781 06:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.041 [2024-08-13 06:02:09.594707] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:07:08.041 [2024-08-13 06:02:09.594928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71999 ] 00:07:08.041 [2024-08-13 06:02:09.739264] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.041 [2024-08-13 06:02:09.789534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.299 [2024-08-13 06:02:09.833071] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.299 [2024-08-13 06:02:09.833107] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.867 06:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.867 06:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:07:08.867 06:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:08.867 06:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:08.867 BaseBdev1_malloc 00:07:08.867 06:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:09.125 true 00:07:09.125 06:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:09.384 [2024-08-13 06:02:11.013650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:09.384 [2024-08-13 06:02:11.013787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.384 [2024-08-13 06:02:11.013818] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:09.384 [2024-08-13 06:02:11.013832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.384 [2024-08-13 06:02:11.016020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.384 [2024-08-13 06:02:11.016079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:09.384 BaseBdev1 00:07:09.384 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:09.384 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:09.643 BaseBdev2_malloc 00:07:09.643 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:09.902 true 00:07:09.902 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:09.902 [2024-08-13 06:02:11.633477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:09.902 [2024-08-13 06:02:11.633584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.902 [2024-08-13 06:02:11.633622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:09.902 [2024-08-13 06:02:11.633633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.902 [2024-08-13 06:02:11.635818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.903 [2024-08-13 06:02:11.635863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:09.903 BaseBdev2 00:07:09.903 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:10.162 [2024-08-13 06:02:11.837192] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.162 [2024-08-13 06:02:11.839082] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.162 [2024-08-13 06:02:11.839319] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:10.162 [2024-08-13 06:02:11.839336] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.162 [2024-08-13 06:02:11.839649] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:10.162 [2024-08-13 06:02:11.839815] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:10.162 [2024-08-13 06:02:11.839831] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:10.162 [2024-08-13 06:02:11.839987] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:10.162 06:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.421 06:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:10.421 "name": "raid_bdev1", 00:07:10.421 "uuid": "3568d14f-e1d1-4e5d-97e4-bec0138140cf", 00:07:10.421 "strip_size_kb": 64, 00:07:10.421 "state": "online", 00:07:10.421 "raid_level": "raid0", 00:07:10.421 "superblock": true, 00:07:10.421 "num_base_bdevs": 2, 00:07:10.421 "num_base_bdevs_discovered": 2, 00:07:10.421 "num_base_bdevs_operational": 2, 00:07:10.421 "base_bdevs_list": [ 00:07:10.421 { 00:07:10.421 "name": "BaseBdev1", 00:07:10.421 "uuid": "c8810b04-3c47-5c0b-be50-4de44d948142", 00:07:10.421 "is_configured": true, 00:07:10.421 "data_offset": 2048, 00:07:10.421 "data_size": 63488 00:07:10.421 }, 00:07:10.421 { 00:07:10.421 "name": "BaseBdev2", 00:07:10.421 "uuid": "549043e9-3b6c-50f8-8bcd-5298b6636ce2", 00:07:10.421 "is_configured": true, 00:07:10.421 "data_offset": 2048, 00:07:10.421 "data_size": 63488 00:07:10.421 } 00:07:10.421 ] 00:07:10.421 }' 00:07:10.421 06:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:10.421 06:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.988 06:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:10.988 06:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:10.988 [2024-08-13 06:02:12.708087] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:11.948 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:12.208 06:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.467 06:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:12.467 "name": "raid_bdev1", 00:07:12.467 "uuid": "3568d14f-e1d1-4e5d-97e4-bec0138140cf", 00:07:12.467 "strip_size_kb": 64, 00:07:12.467 "state": "online", 00:07:12.467 "raid_level": "raid0", 00:07:12.467 "superblock": true, 00:07:12.467 "num_base_bdevs": 2, 00:07:12.467 "num_base_bdevs_discovered": 2, 00:07:12.467 "num_base_bdevs_operational": 2, 00:07:12.467 "base_bdevs_list": [ 00:07:12.467 { 00:07:12.467 "name": "BaseBdev1", 00:07:12.467 "uuid": "c8810b04-3c47-5c0b-be50-4de44d948142", 00:07:12.467 "is_configured": true, 00:07:12.467 "data_offset": 2048, 00:07:12.467 "data_size": 63488 00:07:12.467 }, 00:07:12.467 { 00:07:12.467 "name": "BaseBdev2", 00:07:12.467 "uuid": "549043e9-3b6c-50f8-8bcd-5298b6636ce2", 00:07:12.467 "is_configured": true, 00:07:12.467 "data_offset": 2048, 00:07:12.467 "data_size": 63488 00:07:12.467 } 00:07:12.467 ] 00:07:12.467 }' 00:07:12.467 06:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:12.467 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:13.036 [2024-08-13 06:02:14.754641] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:13.036 [2024-08-13 06:02:14.754762] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.036 [2024-08-13 06:02:14.757277] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.036 [2024-08-13 06:02:14.757373] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.036 [2024-08-13 06:02:14.757422] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.036 [2024-08-13 06:02:14.757463] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:13.036 0 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 71999 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 71999 ']' 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 71999 
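The read-error test that produced the output above follows the same build-up, but slides an error-injection bdev between each malloc bdev and its passthru wrapper, so reads on one raid0 member can be forced to fail while bdevperf drives I/O against the array. A condensed sketch of that flow, with names and paths taken from this run (the bdevperf log path is whatever mktemp returned, here /raidtest/tmp.EWEBUIGxpW), assuming bdevperf was started with -z and is waiting for RPCs on /var/tmp/spdk-raid.sock:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Stack per member: malloc -> error bdev (named EE_<base>) -> passthru -> raid0.
  $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $rpc bdev_error_create BaseBdev1_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $rpc bdev_error_create BaseBdev2_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

  # Inject read failures on the first member, then kick off the bdevperf job.
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests

  # The observed failure rate for raid_bdev1 sits in column 6 of the bdevperf
  # log, which is how the fail_per_s value checked a little further below is
  # extracted.
  grep -v Job /raidtest/tmp.EWEBUIGxpW | grep raid_bdev1 | awk '{print $6}'

Since raid0 has no redundancy, a non-zero failure rate is the expected outcome here; the test only asserts that the value is not 0.00.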
00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71999 00:07:13.036 killing process with pid 71999 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71999' 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 71999 00:07:13.036 [2024-08-13 06:02:14.806380] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.036 06:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 71999 00:07:13.036 [2024-08-13 06:02:14.821994] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.EWEBUIGxpW 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:13.295 ************************************ 00:07:13.295 END TEST raid_read_error_test 00:07:13.295 ************************************ 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:07:13.295 00:07:13.295 real 0m5.564s 00:07:13.295 user 0m8.614s 00:07:13.295 sys 0m0.813s 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.295 06:02:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.554 06:02:15 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:13.554 06:02:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:13.554 06:02:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.555 06:02:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.555 ************************************ 00:07:13.555 START TEST raid_write_error_test 00:07:13.555 ************************************ 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 write 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
(( i <= num_base_bdevs )) 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Tjyu9mjrhA 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=72163 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 72163 /var/tmp/spdk-raid.sock 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 72163 ']' 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:13.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.555 06:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.555 [2024-08-13 06:02:15.233181] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:07:13.555 [2024-08-13 06:02:15.233397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:07:13.814 [2024-08-13 06:02:15.364870] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.814 [2024-08-13 06:02:15.417423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.814 [2024-08-13 06:02:15.460339] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.814 [2024-08-13 06:02:15.460381] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.380 06:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.380 06:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:07:14.380 06:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:14.380 06:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:14.639 BaseBdev1_malloc 00:07:14.639 06:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:15.025 true 00:07:15.025 06:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:15.025 [2024-08-13 06:02:16.672696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:15.025 [2024-08-13 06:02:16.672768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.025 [2024-08-13 06:02:16.672797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:15.025 [2024-08-13 06:02:16.672811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.025 [2024-08-13 06:02:16.675070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.025 BaseBdev1 00:07:15.025 [2024-08-13 06:02:16.675171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:15.025 06:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:15.025 06:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:15.300 BaseBdev2_malloc 00:07:15.300 06:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:15.300 true 00:07:15.558 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:15.558 [2024-08-13 06:02:17.293497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:15.558 [2024-08-13 06:02:17.293641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.558 [2024-08-13 06:02:17.293700] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:15.558 [2024-08-13 06:02:17.293740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.558 [2024-08-13 06:02:17.295988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.558 [2024-08-13 06:02:17.296082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:15.558 BaseBdev2 00:07:15.558 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:15.817 [2024-08-13 06:02:17.489217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.817 [2024-08-13 06:02:17.491178] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.817 [2024-08-13 06:02:17.491461] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:15.817 [2024-08-13 06:02:17.491512] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.817 [2024-08-13 06:02:17.491818] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:15.817 [2024-08-13 06:02:17.491995] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:15.818 [2024-08-13 06:02:17.492048] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:15.818 [2024-08-13 06:02:17.492241] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.818 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.077 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:16.077 "name": "raid_bdev1", 00:07:16.077 "uuid": "5792bd5d-b863-4fb3-8d96-b914c023fc69", 00:07:16.077 "strip_size_kb": 64, 00:07:16.077 "state": "online", 00:07:16.077 "raid_level": "raid0", 00:07:16.077 "superblock": true, 00:07:16.077 "num_base_bdevs": 2, 00:07:16.077 
"num_base_bdevs_discovered": 2, 00:07:16.077 "num_base_bdevs_operational": 2, 00:07:16.077 "base_bdevs_list": [ 00:07:16.077 { 00:07:16.077 "name": "BaseBdev1", 00:07:16.077 "uuid": "29fcf301-7935-5126-888f-f03c55f397d4", 00:07:16.077 "is_configured": true, 00:07:16.077 "data_offset": 2048, 00:07:16.077 "data_size": 63488 00:07:16.077 }, 00:07:16.077 { 00:07:16.077 "name": "BaseBdev2", 00:07:16.077 "uuid": "ab08f6aa-6bbd-596b-8200-36e4ac2dddd3", 00:07:16.077 "is_configured": true, 00:07:16.077 "data_offset": 2048, 00:07:16.077 "data_size": 63488 00:07:16.077 } 00:07:16.077 ] 00:07:16.077 }' 00:07:16.077 06:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:16.077 06:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.644 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:16.644 06:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:16.644 [2024-08-13 06:02:18.360066] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:17.579 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.838 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.096 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:18.096 "name": "raid_bdev1", 00:07:18.096 "uuid": "5792bd5d-b863-4fb3-8d96-b914c023fc69", 00:07:18.096 "strip_size_kb": 64, 00:07:18.096 "state": "online", 00:07:18.096 "raid_level": "raid0", 00:07:18.096 "superblock": true, 00:07:18.096 "num_base_bdevs": 2, 00:07:18.096 
"num_base_bdevs_discovered": 2, 00:07:18.096 "num_base_bdevs_operational": 2, 00:07:18.096 "base_bdevs_list": [ 00:07:18.096 { 00:07:18.096 "name": "BaseBdev1", 00:07:18.096 "uuid": "29fcf301-7935-5126-888f-f03c55f397d4", 00:07:18.096 "is_configured": true, 00:07:18.096 "data_offset": 2048, 00:07:18.096 "data_size": 63488 00:07:18.096 }, 00:07:18.096 { 00:07:18.096 "name": "BaseBdev2", 00:07:18.096 "uuid": "ab08f6aa-6bbd-596b-8200-36e4ac2dddd3", 00:07:18.096 "is_configured": true, 00:07:18.096 "data_offset": 2048, 00:07:18.096 "data_size": 63488 00:07:18.096 } 00:07:18.096 ] 00:07:18.096 }' 00:07:18.097 06:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:18.097 06:02:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.662 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:18.662 [2024-08-13 06:02:20.443122] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.663 [2024-08-13 06:02:20.443249] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.663 [2024-08-13 06:02:20.445728] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.663 [2024-08-13 06:02:20.445790] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.663 [2024-08-13 06:02:20.445824] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.663 [2024-08-13 06:02:20.445835] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:18.663 0 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 72163 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 72163 ']' 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 72163 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72163 00:07:18.922 killing process with pid 72163 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72163' 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 72163 00:07:18.922 [2024-08-13 06:02:20.510403] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.922 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 72163 00:07:18.922 [2024-08-13 06:02:20.525639] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.181 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Tjyu9mjrhA 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 
-- # grep raid_bdev1 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:07:19.182 ************************************ 00:07:19.182 END TEST raid_write_error_test 00:07:19.182 ************************************ 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:07:19.182 00:07:19.182 real 0m5.629s 00:07:19.182 user 0m8.685s 00:07:19.182 sys 0m0.850s 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.182 06:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.182 06:02:20 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:07:19.182 06:02:20 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:19.182 06:02:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:19.182 06:02:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.182 06:02:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.182 ************************************ 00:07:19.182 START TEST raid_state_function_test 00:07:19.182 ************************************ 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=72328 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 72328' 00:07:19.182 Process raid pid: 72328 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 72328 /var/tmp/spdk-raid.sock 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 72328 ']' 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:19.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:19.182 06:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.182 [2024-08-13 06:02:20.927207] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:07:19.182 [2024-08-13 06:02:20.927409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.441 [2024-08-13 06:02:21.075470] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.441 [2024-08-13 06:02:21.122025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.441 [2024-08-13 06:02:21.165406] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.441 [2024-08-13 06:02:21.165440] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.008 06:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:20.008 06:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:07:20.008 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:20.266 [2024-08-13 06:02:21.957350] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.266 [2024-08-13 06:02:21.957418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.266 [2024-08-13 06:02:21.957433] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.266 [2024-08-13 06:02:21.957441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.266 06:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.524 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:20.524 "name": "Existed_Raid", 00:07:20.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.524 "strip_size_kb": 64, 00:07:20.524 "state": "configuring", 00:07:20.524 "raid_level": "concat", 00:07:20.524 "superblock": false, 00:07:20.524 "num_base_bdevs": 
2, 00:07:20.524 "num_base_bdevs_discovered": 0, 00:07:20.524 "num_base_bdevs_operational": 2, 00:07:20.524 "base_bdevs_list": [ 00:07:20.524 { 00:07:20.524 "name": "BaseBdev1", 00:07:20.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.524 "is_configured": false, 00:07:20.524 "data_offset": 0, 00:07:20.524 "data_size": 0 00:07:20.524 }, 00:07:20.524 { 00:07:20.524 "name": "BaseBdev2", 00:07:20.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.524 "is_configured": false, 00:07:20.524 "data_offset": 0, 00:07:20.524 "data_size": 0 00:07:20.524 } 00:07:20.524 ] 00:07:20.524 }' 00:07:20.524 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:20.524 06:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.091 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:21.349 [2024-08-13 06:02:22.927823] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.349 [2024-08-13 06:02:22.927954] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:21.349 06:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:21.608 [2024-08-13 06:02:23.143409] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.608 [2024-08-13 06:02:23.143560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.608 [2024-08-13 06:02:23.143623] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.608 [2024-08-13 06:02:23.143647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.608 [2024-08-13 06:02:23.344116] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.608 BaseBdev1 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:21.608 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:21.866 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.135 [ 00:07:22.135 { 00:07:22.135 "name": "BaseBdev1", 00:07:22.135 "aliases": [ 00:07:22.135 
"6571070d-01c6-4c25-87b3-1ddfb66326ca" 00:07:22.135 ], 00:07:22.135 "product_name": "Malloc disk", 00:07:22.135 "block_size": 512, 00:07:22.135 "num_blocks": 65536, 00:07:22.135 "uuid": "6571070d-01c6-4c25-87b3-1ddfb66326ca", 00:07:22.135 "assigned_rate_limits": { 00:07:22.135 "rw_ios_per_sec": 0, 00:07:22.135 "rw_mbytes_per_sec": 0, 00:07:22.135 "r_mbytes_per_sec": 0, 00:07:22.135 "w_mbytes_per_sec": 0 00:07:22.135 }, 00:07:22.135 "claimed": true, 00:07:22.135 "claim_type": "exclusive_write", 00:07:22.135 "zoned": false, 00:07:22.135 "supported_io_types": { 00:07:22.135 "read": true, 00:07:22.135 "write": true, 00:07:22.135 "unmap": true, 00:07:22.135 "flush": true, 00:07:22.135 "reset": true, 00:07:22.135 "nvme_admin": false, 00:07:22.135 "nvme_io": false, 00:07:22.135 "nvme_io_md": false, 00:07:22.135 "write_zeroes": true, 00:07:22.135 "zcopy": true, 00:07:22.135 "get_zone_info": false, 00:07:22.135 "zone_management": false, 00:07:22.135 "zone_append": false, 00:07:22.135 "compare": false, 00:07:22.135 "compare_and_write": false, 00:07:22.135 "abort": true, 00:07:22.135 "seek_hole": false, 00:07:22.135 "seek_data": false, 00:07:22.135 "copy": true, 00:07:22.135 "nvme_iov_md": false 00:07:22.135 }, 00:07:22.135 "memory_domains": [ 00:07:22.135 { 00:07:22.135 "dma_device_id": "system", 00:07:22.135 "dma_device_type": 1 00:07:22.135 }, 00:07:22.135 { 00:07:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.135 "dma_device_type": 2 00:07:22.135 } 00:07:22.135 ], 00:07:22.135 "driver_specific": {} 00:07:22.135 } 00:07:22.135 ] 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.135 06:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.399 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:22.399 "name": "Existed_Raid", 00:07:22.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.399 "strip_size_kb": 64, 00:07:22.399 "state": "configuring", 00:07:22.399 "raid_level": "concat", 00:07:22.399 "superblock": false, 00:07:22.399 "num_base_bdevs": 2, 00:07:22.399 
"num_base_bdevs_discovered": 1, 00:07:22.399 "num_base_bdevs_operational": 2, 00:07:22.399 "base_bdevs_list": [ 00:07:22.399 { 00:07:22.399 "name": "BaseBdev1", 00:07:22.400 "uuid": "6571070d-01c6-4c25-87b3-1ddfb66326ca", 00:07:22.400 "is_configured": true, 00:07:22.400 "data_offset": 0, 00:07:22.400 "data_size": 65536 00:07:22.400 }, 00:07:22.400 { 00:07:22.400 "name": "BaseBdev2", 00:07:22.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.400 "is_configured": false, 00:07:22.400 "data_offset": 0, 00:07:22.400 "data_size": 0 00:07:22.400 } 00:07:22.400 ] 00:07:22.400 }' 00:07:22.400 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:22.400 06:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.975 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:23.234 [2024-08-13 06:02:24.777677] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.234 [2024-08-13 06:02:24.777794] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:23.234 [2024-08-13 06:02:24.949424] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.234 [2024-08-13 06:02:24.951278] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.234 [2024-08-13 06:02:24.951357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.234 06:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:23.493 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:23.493 "name": "Existed_Raid", 00:07:23.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.493 "strip_size_kb": 64, 00:07:23.493 "state": "configuring", 00:07:23.493 "raid_level": "concat", 00:07:23.493 "superblock": false, 00:07:23.493 "num_base_bdevs": 2, 00:07:23.493 "num_base_bdevs_discovered": 1, 00:07:23.493 "num_base_bdevs_operational": 2, 00:07:23.493 "base_bdevs_list": [ 00:07:23.493 { 00:07:23.493 "name": "BaseBdev1", 00:07:23.493 "uuid": "6571070d-01c6-4c25-87b3-1ddfb66326ca", 00:07:23.493 "is_configured": true, 00:07:23.493 "data_offset": 0, 00:07:23.493 "data_size": 65536 00:07:23.493 }, 00:07:23.493 { 00:07:23.493 "name": "BaseBdev2", 00:07:23.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.493 "is_configured": false, 00:07:23.493 "data_offset": 0, 00:07:23.493 "data_size": 0 00:07:23.493 } 00:07:23.493 ] 00:07:23.493 }' 00:07:23.493 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:23.493 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.061 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.320 [2024-08-13 06:02:25.882637] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.320 [2024-08-13 06:02:25.882766] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:24.320 [2024-08-13 06:02:25.882787] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:24.320 [2024-08-13 06:02:25.883124] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:24.320 [2024-08-13 06:02:25.883298] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:24.320 [2024-08-13 06:02:25.883310] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:24.320 [2024-08-13 06:02:25.883545] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.320 BaseBdev2 00:07:24.320 06:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:24.320 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:07:24.320 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:24.320 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:07:24.320 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:24.320 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:24.320 06:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:24.320 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.579 [ 00:07:24.579 { 00:07:24.579 "name": "BaseBdev2", 00:07:24.579 "aliases": [ 00:07:24.579 "272cc75c-3785-4e87-b99a-126e0e36a684" 00:07:24.579 ], 00:07:24.579 
"product_name": "Malloc disk", 00:07:24.579 "block_size": 512, 00:07:24.579 "num_blocks": 65536, 00:07:24.579 "uuid": "272cc75c-3785-4e87-b99a-126e0e36a684", 00:07:24.579 "assigned_rate_limits": { 00:07:24.579 "rw_ios_per_sec": 0, 00:07:24.579 "rw_mbytes_per_sec": 0, 00:07:24.579 "r_mbytes_per_sec": 0, 00:07:24.579 "w_mbytes_per_sec": 0 00:07:24.579 }, 00:07:24.579 "claimed": true, 00:07:24.579 "claim_type": "exclusive_write", 00:07:24.579 "zoned": false, 00:07:24.579 "supported_io_types": { 00:07:24.579 "read": true, 00:07:24.579 "write": true, 00:07:24.579 "unmap": true, 00:07:24.579 "flush": true, 00:07:24.579 "reset": true, 00:07:24.579 "nvme_admin": false, 00:07:24.579 "nvme_io": false, 00:07:24.579 "nvme_io_md": false, 00:07:24.579 "write_zeroes": true, 00:07:24.579 "zcopy": true, 00:07:24.579 "get_zone_info": false, 00:07:24.579 "zone_management": false, 00:07:24.579 "zone_append": false, 00:07:24.579 "compare": false, 00:07:24.579 "compare_and_write": false, 00:07:24.579 "abort": true, 00:07:24.579 "seek_hole": false, 00:07:24.579 "seek_data": false, 00:07:24.579 "copy": true, 00:07:24.579 "nvme_iov_md": false 00:07:24.579 }, 00:07:24.579 "memory_domains": [ 00:07:24.579 { 00:07:24.579 "dma_device_id": "system", 00:07:24.579 "dma_device_type": 1 00:07:24.579 }, 00:07:24.579 { 00:07:24.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.579 "dma_device_type": 2 00:07:24.579 } 00:07:24.579 ], 00:07:24.579 "driver_specific": {} 00:07:24.579 } 00:07:24.579 ] 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:24.579 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.838 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:24.838 "name": "Existed_Raid", 00:07:24.838 "uuid": "56fd9d31-39bb-46bf-9e24-83247e971eb7", 00:07:24.838 "strip_size_kb": 64, 00:07:24.838 "state": "online", 00:07:24.838 
"raid_level": "concat", 00:07:24.838 "superblock": false, 00:07:24.838 "num_base_bdevs": 2, 00:07:24.838 "num_base_bdevs_discovered": 2, 00:07:24.838 "num_base_bdevs_operational": 2, 00:07:24.838 "base_bdevs_list": [ 00:07:24.838 { 00:07:24.838 "name": "BaseBdev1", 00:07:24.838 "uuid": "6571070d-01c6-4c25-87b3-1ddfb66326ca", 00:07:24.838 "is_configured": true, 00:07:24.838 "data_offset": 0, 00:07:24.838 "data_size": 65536 00:07:24.838 }, 00:07:24.838 { 00:07:24.838 "name": "BaseBdev2", 00:07:24.838 "uuid": "272cc75c-3785-4e87-b99a-126e0e36a684", 00:07:24.838 "is_configured": true, 00:07:24.838 "data_offset": 0, 00:07:24.838 "data_size": 65536 00:07:24.838 } 00:07:24.838 ] 00:07:24.838 }' 00:07:24.838 06:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:24.838 06:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:25.407 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:25.667 [2024-08-13 06:02:27.268852] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.667 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:25.667 "name": "Existed_Raid", 00:07:25.667 "aliases": [ 00:07:25.667 "56fd9d31-39bb-46bf-9e24-83247e971eb7" 00:07:25.667 ], 00:07:25.667 "product_name": "Raid Volume", 00:07:25.667 "block_size": 512, 00:07:25.667 "num_blocks": 131072, 00:07:25.667 "uuid": "56fd9d31-39bb-46bf-9e24-83247e971eb7", 00:07:25.667 "assigned_rate_limits": { 00:07:25.667 "rw_ios_per_sec": 0, 00:07:25.667 "rw_mbytes_per_sec": 0, 00:07:25.667 "r_mbytes_per_sec": 0, 00:07:25.667 "w_mbytes_per_sec": 0 00:07:25.667 }, 00:07:25.667 "claimed": false, 00:07:25.667 "zoned": false, 00:07:25.667 "supported_io_types": { 00:07:25.667 "read": true, 00:07:25.667 "write": true, 00:07:25.667 "unmap": true, 00:07:25.667 "flush": true, 00:07:25.667 "reset": true, 00:07:25.667 "nvme_admin": false, 00:07:25.667 "nvme_io": false, 00:07:25.667 "nvme_io_md": false, 00:07:25.667 "write_zeroes": true, 00:07:25.667 "zcopy": false, 00:07:25.667 "get_zone_info": false, 00:07:25.667 "zone_management": false, 00:07:25.667 "zone_append": false, 00:07:25.667 "compare": false, 00:07:25.667 "compare_and_write": false, 00:07:25.667 "abort": false, 00:07:25.667 "seek_hole": false, 00:07:25.667 "seek_data": false, 00:07:25.667 "copy": false, 00:07:25.667 "nvme_iov_md": false 00:07:25.667 }, 00:07:25.667 "memory_domains": [ 00:07:25.667 { 00:07:25.667 "dma_device_id": "system", 00:07:25.667 "dma_device_type": 1 00:07:25.667 }, 00:07:25.667 { 00:07:25.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:25.667 "dma_device_type": 2 00:07:25.667 }, 00:07:25.667 { 00:07:25.667 "dma_device_id": "system", 00:07:25.667 "dma_device_type": 1 00:07:25.667 }, 00:07:25.667 { 00:07:25.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.667 "dma_device_type": 2 00:07:25.667 } 00:07:25.667 ], 00:07:25.667 "driver_specific": { 00:07:25.667 "raid": { 00:07:25.667 "uuid": "56fd9d31-39bb-46bf-9e24-83247e971eb7", 00:07:25.667 "strip_size_kb": 64, 00:07:25.667 "state": "online", 00:07:25.667 "raid_level": "concat", 00:07:25.667 "superblock": false, 00:07:25.667 "num_base_bdevs": 2, 00:07:25.667 "num_base_bdevs_discovered": 2, 00:07:25.667 "num_base_bdevs_operational": 2, 00:07:25.667 "base_bdevs_list": [ 00:07:25.667 { 00:07:25.667 "name": "BaseBdev1", 00:07:25.667 "uuid": "6571070d-01c6-4c25-87b3-1ddfb66326ca", 00:07:25.667 "is_configured": true, 00:07:25.667 "data_offset": 0, 00:07:25.667 "data_size": 65536 00:07:25.667 }, 00:07:25.667 { 00:07:25.667 "name": "BaseBdev2", 00:07:25.667 "uuid": "272cc75c-3785-4e87-b99a-126e0e36a684", 00:07:25.667 "is_configured": true, 00:07:25.667 "data_offset": 0, 00:07:25.667 "data_size": 65536 00:07:25.667 } 00:07:25.667 ] 00:07:25.667 } 00:07:25.667 } 00:07:25.667 }' 00:07:25.667 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.668 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:25.668 BaseBdev2' 00:07:25.668 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:25.668 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:25.668 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:25.927 "name": "BaseBdev1", 00:07:25.927 "aliases": [ 00:07:25.927 "6571070d-01c6-4c25-87b3-1ddfb66326ca" 00:07:25.927 ], 00:07:25.927 "product_name": "Malloc disk", 00:07:25.927 "block_size": 512, 00:07:25.927 "num_blocks": 65536, 00:07:25.927 "uuid": "6571070d-01c6-4c25-87b3-1ddfb66326ca", 00:07:25.927 "assigned_rate_limits": { 00:07:25.927 "rw_ios_per_sec": 0, 00:07:25.927 "rw_mbytes_per_sec": 0, 00:07:25.927 "r_mbytes_per_sec": 0, 00:07:25.927 "w_mbytes_per_sec": 0 00:07:25.927 }, 00:07:25.927 "claimed": true, 00:07:25.927 "claim_type": "exclusive_write", 00:07:25.927 "zoned": false, 00:07:25.927 "supported_io_types": { 00:07:25.927 "read": true, 00:07:25.927 "write": true, 00:07:25.927 "unmap": true, 00:07:25.927 "flush": true, 00:07:25.927 "reset": true, 00:07:25.927 "nvme_admin": false, 00:07:25.927 "nvme_io": false, 00:07:25.927 "nvme_io_md": false, 00:07:25.927 "write_zeroes": true, 00:07:25.927 "zcopy": true, 00:07:25.927 "get_zone_info": false, 00:07:25.927 "zone_management": false, 00:07:25.927 "zone_append": false, 00:07:25.927 "compare": false, 00:07:25.927 "compare_and_write": false, 00:07:25.927 "abort": true, 00:07:25.927 "seek_hole": false, 00:07:25.927 "seek_data": false, 00:07:25.927 "copy": true, 00:07:25.927 "nvme_iov_md": false 00:07:25.927 }, 00:07:25.927 "memory_domains": [ 00:07:25.927 { 00:07:25.927 "dma_device_id": "system", 00:07:25.927 "dma_device_type": 1 00:07:25.927 }, 00:07:25.927 { 00:07:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.927 
"dma_device_type": 2 00:07:25.927 } 00:07:25.927 ], 00:07:25.927 "driver_specific": {} 00:07:25.927 }' 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:25.927 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:26.187 06:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:26.446 "name": "BaseBdev2", 00:07:26.446 "aliases": [ 00:07:26.446 "272cc75c-3785-4e87-b99a-126e0e36a684" 00:07:26.446 ], 00:07:26.446 "product_name": "Malloc disk", 00:07:26.446 "block_size": 512, 00:07:26.446 "num_blocks": 65536, 00:07:26.446 "uuid": "272cc75c-3785-4e87-b99a-126e0e36a684", 00:07:26.446 "assigned_rate_limits": { 00:07:26.446 "rw_ios_per_sec": 0, 00:07:26.446 "rw_mbytes_per_sec": 0, 00:07:26.446 "r_mbytes_per_sec": 0, 00:07:26.446 "w_mbytes_per_sec": 0 00:07:26.446 }, 00:07:26.446 "claimed": true, 00:07:26.446 "claim_type": "exclusive_write", 00:07:26.446 "zoned": false, 00:07:26.446 "supported_io_types": { 00:07:26.446 "read": true, 00:07:26.446 "write": true, 00:07:26.446 "unmap": true, 00:07:26.446 "flush": true, 00:07:26.446 "reset": true, 00:07:26.446 "nvme_admin": false, 00:07:26.446 "nvme_io": false, 00:07:26.446 "nvme_io_md": false, 00:07:26.446 "write_zeroes": true, 00:07:26.446 "zcopy": true, 00:07:26.446 "get_zone_info": false, 00:07:26.446 "zone_management": false, 00:07:26.446 "zone_append": false, 00:07:26.446 "compare": false, 00:07:26.446 "compare_and_write": false, 00:07:26.446 "abort": true, 00:07:26.446 "seek_hole": false, 00:07:26.446 "seek_data": false, 00:07:26.446 "copy": true, 00:07:26.446 "nvme_iov_md": false 00:07:26.446 }, 00:07:26.446 "memory_domains": [ 00:07:26.446 { 00:07:26.446 "dma_device_id": "system", 00:07:26.446 "dma_device_type": 1 00:07:26.446 }, 00:07:26.446 { 00:07:26.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.446 "dma_device_type": 2 00:07:26.446 } 00:07:26.446 ], 00:07:26.446 "driver_specific": {} 00:07:26.446 }' 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:26.446 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:26.706 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:26.706 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:26.706 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.706 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:26.706 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:26.706 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:26.967 [2024-08-13 06:02:28.614419] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.967 [2024-08-13 06:02:28.614543] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.967 [2024-08-13 06:02:28.614626] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:26.967 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:26.967 
06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.226 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:27.226 "name": "Existed_Raid", 00:07:27.226 "uuid": "56fd9d31-39bb-46bf-9e24-83247e971eb7", 00:07:27.226 "strip_size_kb": 64, 00:07:27.226 "state": "offline", 00:07:27.226 "raid_level": "concat", 00:07:27.226 "superblock": false, 00:07:27.226 "num_base_bdevs": 2, 00:07:27.226 "num_base_bdevs_discovered": 1, 00:07:27.226 "num_base_bdevs_operational": 1, 00:07:27.226 "base_bdevs_list": [ 00:07:27.226 { 00:07:27.227 "name": null, 00:07:27.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.227 "is_configured": false, 00:07:27.227 "data_offset": 0, 00:07:27.227 "data_size": 65536 00:07:27.227 }, 00:07:27.227 { 00:07:27.227 "name": "BaseBdev2", 00:07:27.227 "uuid": "272cc75c-3785-4e87-b99a-126e0e36a684", 00:07:27.227 "is_configured": true, 00:07:27.227 "data_offset": 0, 00:07:27.227 "data_size": 65536 00:07:27.227 } 00:07:27.227 ] 00:07:27.227 }' 00:07:27.227 06:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:27.227 06:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:27.796 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:27.796 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.796 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:28.055 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:28.055 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:28.055 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:28.055 [2024-08-13 06:02:29.839964] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:28.055 [2024-08-13 06:02:29.840160] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:28.315 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:28.315 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:28.315 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.315 06:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 72328 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 72328 ']' 00:07:28.315 06:02:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 72328 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.315 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72328 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.575 killing process with pid 72328 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72328' 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 72328 00:07:28.575 [2024-08-13 06:02:30.112766] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 72328 00:07:28.575 [2024-08-13 06:02:30.113908] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:28.575 00:07:28.575 real 0m9.511s 00:07:28.575 user 0m17.074s 00:07:28.575 sys 0m1.457s 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.575 06:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.575 ************************************ 00:07:28.575 END TEST raid_state_function_test 00:07:28.575 ************************************ 00:07:28.835 06:02:30 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:28.835 06:02:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:28.835 06:02:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.835 06:02:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.835 ************************************ 00:07:28.835 START TEST raid_state_function_test_sb 00:07:28.835 ************************************ 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:28.835 Process raid pid: 72668 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=72668 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 72668' 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 72668 /var/tmp/spdk-raid.sock 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 72668 ']' 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.835 06:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.835 [2024-08-13 06:02:30.503768] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
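At this point the superblock variant of the state-function test has launched a fresh bdev_svc app (pid 72668) with RAID debug logging and is waiting for it to listen on /var/tmp/spdk-raid.sock; everything that follows in the trace is driven over that socket with scripts/rpc.py. A condensed, stand-alone sketch of the flow being exercised (an assumption-laden illustration, not the test's exact ordering: it assumes a bdev_svc instance is already listening on the same socket, that jq is available, and paths are relative to an SPDK checkout) could look like:

    # create the two 32 MiB, 512-byte-block malloc bdevs that will serve as RAID members
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    # assemble a concat RAID bdev with a 64 KiB strip size and an on-disk superblock (-s)
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # confirm the array reached the expected state, as the trace does with bdev_raid_get_bdevs + jq
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

The actual test interleaves these calls with verify_raid_bdev_state checks (configuring before the base bdevs exist, online once both are claimed, offline after a member is deleted), which is what produces the repeated bdev_raid_get_bdevs/jq output in the log.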
00:07:28.835 [2024-08-13 06:02:30.503876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.095 [2024-08-13 06:02:30.631531] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.095 [2024-08-13 06:02:30.679390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.095 [2024-08-13 06:02:30.723100] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.095 [2024-08-13 06:02:30.723139] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.664 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.664 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:07:29.664 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:29.924 [2024-08-13 06:02:31.519640] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.924 [2024-08-13 06:02:31.519700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.924 [2024-08-13 06:02:31.519721] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.924 [2024-08-13 06:02:31.519730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.924 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.183 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:30.183 "name": "Existed_Raid", 00:07:30.183 "uuid": "f626b743-9080-45cb-a28f-453fd7b380dc", 00:07:30.183 "strip_size_kb": 64, 00:07:30.183 "state": "configuring", 00:07:30.183 "raid_level": "concat", 00:07:30.183 
"superblock": true, 00:07:30.183 "num_base_bdevs": 2, 00:07:30.183 "num_base_bdevs_discovered": 0, 00:07:30.183 "num_base_bdevs_operational": 2, 00:07:30.183 "base_bdevs_list": [ 00:07:30.183 { 00:07:30.183 "name": "BaseBdev1", 00:07:30.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.183 "is_configured": false, 00:07:30.183 "data_offset": 0, 00:07:30.183 "data_size": 0 00:07:30.183 }, 00:07:30.183 { 00:07:30.183 "name": "BaseBdev2", 00:07:30.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.183 "is_configured": false, 00:07:30.183 "data_offset": 0, 00:07:30.183 "data_size": 0 00:07:30.183 } 00:07:30.183 ] 00:07:30.183 }' 00:07:30.183 06:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:30.183 06:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.753 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:30.753 [2024-08-13 06:02:32.457923] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:30.753 [2024-08-13 06:02:32.458070] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:30.753 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:31.012 [2024-08-13 06:02:32.665620] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.012 [2024-08-13 06:02:32.665761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.012 [2024-08-13 06:02:32.665806] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.012 [2024-08-13 06:02:32.665828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.012 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:31.272 [2024-08-13 06:02:32.890450] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.272 BaseBdev1 00:07:31.272 06:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:31.272 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:07:31.272 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:31.272 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:07:31.272 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:31.272 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:31.272 06:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:31.531 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:31.531 [ 00:07:31.532 { 
00:07:31.532 "name": "BaseBdev1", 00:07:31.532 "aliases": [ 00:07:31.532 "96db0d67-06a5-4991-a435-1dcd1af45f01" 00:07:31.532 ], 00:07:31.532 "product_name": "Malloc disk", 00:07:31.532 "block_size": 512, 00:07:31.532 "num_blocks": 65536, 00:07:31.532 "uuid": "96db0d67-06a5-4991-a435-1dcd1af45f01", 00:07:31.532 "assigned_rate_limits": { 00:07:31.532 "rw_ios_per_sec": 0, 00:07:31.532 "rw_mbytes_per_sec": 0, 00:07:31.532 "r_mbytes_per_sec": 0, 00:07:31.532 "w_mbytes_per_sec": 0 00:07:31.532 }, 00:07:31.532 "claimed": true, 00:07:31.532 "claim_type": "exclusive_write", 00:07:31.532 "zoned": false, 00:07:31.532 "supported_io_types": { 00:07:31.532 "read": true, 00:07:31.532 "write": true, 00:07:31.532 "unmap": true, 00:07:31.532 "flush": true, 00:07:31.532 "reset": true, 00:07:31.532 "nvme_admin": false, 00:07:31.532 "nvme_io": false, 00:07:31.532 "nvme_io_md": false, 00:07:31.532 "write_zeroes": true, 00:07:31.532 "zcopy": true, 00:07:31.532 "get_zone_info": false, 00:07:31.532 "zone_management": false, 00:07:31.532 "zone_append": false, 00:07:31.532 "compare": false, 00:07:31.532 "compare_and_write": false, 00:07:31.532 "abort": true, 00:07:31.532 "seek_hole": false, 00:07:31.532 "seek_data": false, 00:07:31.532 "copy": true, 00:07:31.532 "nvme_iov_md": false 00:07:31.532 }, 00:07:31.532 "memory_domains": [ 00:07:31.532 { 00:07:31.532 "dma_device_id": "system", 00:07:31.532 "dma_device_type": 1 00:07:31.532 }, 00:07:31.532 { 00:07:31.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.532 "dma_device_type": 2 00:07:31.532 } 00:07:31.532 ], 00:07:31.532 "driver_specific": {} 00:07:31.532 } 00:07:31.532 ] 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.532 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.791 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:31.791 "name": "Existed_Raid", 00:07:31.792 "uuid": "51100a3e-d0d4-4929-b337-312795160f2f", 00:07:31.792 "strip_size_kb": 64, 00:07:31.792 "state": "configuring", 00:07:31.792 "raid_level": 
"concat", 00:07:31.792 "superblock": true, 00:07:31.792 "num_base_bdevs": 2, 00:07:31.792 "num_base_bdevs_discovered": 1, 00:07:31.792 "num_base_bdevs_operational": 2, 00:07:31.792 "base_bdevs_list": [ 00:07:31.792 { 00:07:31.792 "name": "BaseBdev1", 00:07:31.792 "uuid": "96db0d67-06a5-4991-a435-1dcd1af45f01", 00:07:31.792 "is_configured": true, 00:07:31.792 "data_offset": 2048, 00:07:31.792 "data_size": 63488 00:07:31.792 }, 00:07:31.792 { 00:07:31.792 "name": "BaseBdev2", 00:07:31.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.792 "is_configured": false, 00:07:31.792 "data_offset": 0, 00:07:31.792 "data_size": 0 00:07:31.792 } 00:07:31.792 ] 00:07:31.792 }' 00:07:31.792 06:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:31.792 06:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.361 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:32.620 [2024-08-13 06:02:34.272174] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.620 [2024-08-13 06:02:34.272301] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:32.620 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:32.879 [2024-08-13 06:02:34.475889] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.879 [2024-08-13 06:02:34.478060] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.879 [2024-08-13 06:02:34.478143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:32.879 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.879 06:02:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.139 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:33.139 "name": "Existed_Raid", 00:07:33.139 "uuid": "d54c860e-6d7f-4c63-bb7e-08ceb3e0c58e", 00:07:33.139 "strip_size_kb": 64, 00:07:33.139 "state": "configuring", 00:07:33.139 "raid_level": "concat", 00:07:33.139 "superblock": true, 00:07:33.139 "num_base_bdevs": 2, 00:07:33.139 "num_base_bdevs_discovered": 1, 00:07:33.139 "num_base_bdevs_operational": 2, 00:07:33.139 "base_bdevs_list": [ 00:07:33.139 { 00:07:33.139 "name": "BaseBdev1", 00:07:33.139 "uuid": "96db0d67-06a5-4991-a435-1dcd1af45f01", 00:07:33.139 "is_configured": true, 00:07:33.139 "data_offset": 2048, 00:07:33.139 "data_size": 63488 00:07:33.139 }, 00:07:33.139 { 00:07:33.139 "name": "BaseBdev2", 00:07:33.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.139 "is_configured": false, 00:07:33.139 "data_offset": 0, 00:07:33.139 "data_size": 0 00:07:33.139 } 00:07:33.139 ] 00:07:33.139 }' 00:07:33.139 06:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:33.139 06:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.717 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.978 [2024-08-13 06:02:35.522771] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.978 [2024-08-13 06:02:35.523114] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:33.978 [2024-08-13 06:02:35.523189] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.978 [2024-08-13 06:02:35.523568] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:33.978 [2024-08-13 06:02:35.523784] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:33.978 [2024-08-13 06:02:35.523842] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:33.978 [2024-08-13 06:02:35.524057] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.978 BaseBdev2 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:33.978 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 -t 2000 00:07:34.237 [ 00:07:34.237 { 00:07:34.237 "name": "BaseBdev2", 00:07:34.237 "aliases": [ 00:07:34.237 "4c90f535-5bfe-40ad-be1f-27813f93e843" 00:07:34.237 ], 00:07:34.237 "product_name": "Malloc disk", 00:07:34.237 "block_size": 512, 00:07:34.237 "num_blocks": 65536, 00:07:34.237 "uuid": "4c90f535-5bfe-40ad-be1f-27813f93e843", 00:07:34.237 "assigned_rate_limits": { 00:07:34.237 "rw_ios_per_sec": 0, 00:07:34.237 "rw_mbytes_per_sec": 0, 00:07:34.237 "r_mbytes_per_sec": 0, 00:07:34.237 "w_mbytes_per_sec": 0 00:07:34.237 }, 00:07:34.237 "claimed": true, 00:07:34.237 "claim_type": "exclusive_write", 00:07:34.237 "zoned": false, 00:07:34.237 "supported_io_types": { 00:07:34.237 "read": true, 00:07:34.237 "write": true, 00:07:34.237 "unmap": true, 00:07:34.237 "flush": true, 00:07:34.237 "reset": true, 00:07:34.237 "nvme_admin": false, 00:07:34.237 "nvme_io": false, 00:07:34.237 "nvme_io_md": false, 00:07:34.237 "write_zeroes": true, 00:07:34.237 "zcopy": true, 00:07:34.237 "get_zone_info": false, 00:07:34.237 "zone_management": false, 00:07:34.237 "zone_append": false, 00:07:34.237 "compare": false, 00:07:34.237 "compare_and_write": false, 00:07:34.237 "abort": true, 00:07:34.237 "seek_hole": false, 00:07:34.237 "seek_data": false, 00:07:34.237 "copy": true, 00:07:34.237 "nvme_iov_md": false 00:07:34.237 }, 00:07:34.237 "memory_domains": [ 00:07:34.237 { 00:07:34.237 "dma_device_id": "system", 00:07:34.237 "dma_device_type": 1 00:07:34.237 }, 00:07:34.237 { 00:07:34.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.237 "dma_device_type": 2 00:07:34.237 } 00:07:34.237 ], 00:07:34.237 "driver_specific": {} 00:07:34.237 } 00:07:34.237 ] 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.237 06:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.496 06:02:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:34.496 "name": "Existed_Raid", 00:07:34.496 "uuid": "d54c860e-6d7f-4c63-bb7e-08ceb3e0c58e", 00:07:34.496 "strip_size_kb": 64, 00:07:34.496 "state": "online", 00:07:34.496 "raid_level": "concat", 00:07:34.496 "superblock": true, 00:07:34.496 "num_base_bdevs": 2, 00:07:34.496 "num_base_bdevs_discovered": 2, 00:07:34.496 "num_base_bdevs_operational": 2, 00:07:34.496 "base_bdevs_list": [ 00:07:34.496 { 00:07:34.496 "name": "BaseBdev1", 00:07:34.496 "uuid": "96db0d67-06a5-4991-a435-1dcd1af45f01", 00:07:34.496 "is_configured": true, 00:07:34.496 "data_offset": 2048, 00:07:34.496 "data_size": 63488 00:07:34.496 }, 00:07:34.496 { 00:07:34.496 "name": "BaseBdev2", 00:07:34.496 "uuid": "4c90f535-5bfe-40ad-be1f-27813f93e843", 00:07:34.496 "is_configured": true, 00:07:34.496 "data_offset": 2048, 00:07:34.496 "data_size": 63488 00:07:34.496 } 00:07:34.496 ] 00:07:34.496 }' 00:07:34.496 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:34.496 06:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:35.063 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:35.323 [2024-08-13 06:02:36.916894] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.323 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:35.323 "name": "Existed_Raid", 00:07:35.323 "aliases": [ 00:07:35.323 "d54c860e-6d7f-4c63-bb7e-08ceb3e0c58e" 00:07:35.323 ], 00:07:35.323 "product_name": "Raid Volume", 00:07:35.323 "block_size": 512, 00:07:35.323 "num_blocks": 126976, 00:07:35.323 "uuid": "d54c860e-6d7f-4c63-bb7e-08ceb3e0c58e", 00:07:35.323 "assigned_rate_limits": { 00:07:35.323 "rw_ios_per_sec": 0, 00:07:35.323 "rw_mbytes_per_sec": 0, 00:07:35.323 "r_mbytes_per_sec": 0, 00:07:35.323 "w_mbytes_per_sec": 0 00:07:35.323 }, 00:07:35.323 "claimed": false, 00:07:35.323 "zoned": false, 00:07:35.323 "supported_io_types": { 00:07:35.323 "read": true, 00:07:35.323 "write": true, 00:07:35.323 "unmap": true, 00:07:35.323 "flush": true, 00:07:35.323 "reset": true, 00:07:35.323 "nvme_admin": false, 00:07:35.323 "nvme_io": false, 00:07:35.323 "nvme_io_md": false, 00:07:35.323 "write_zeroes": true, 00:07:35.323 "zcopy": false, 00:07:35.323 "get_zone_info": false, 00:07:35.323 "zone_management": false, 00:07:35.323 "zone_append": false, 00:07:35.323 "compare": false, 00:07:35.323 "compare_and_write": false, 00:07:35.323 "abort": false, 00:07:35.323 "seek_hole": false, 00:07:35.323 "seek_data": 
false, 00:07:35.323 "copy": false, 00:07:35.323 "nvme_iov_md": false 00:07:35.323 }, 00:07:35.323 "memory_domains": [ 00:07:35.323 { 00:07:35.323 "dma_device_id": "system", 00:07:35.323 "dma_device_type": 1 00:07:35.323 }, 00:07:35.323 { 00:07:35.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.323 "dma_device_type": 2 00:07:35.323 }, 00:07:35.323 { 00:07:35.323 "dma_device_id": "system", 00:07:35.323 "dma_device_type": 1 00:07:35.323 }, 00:07:35.323 { 00:07:35.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.323 "dma_device_type": 2 00:07:35.323 } 00:07:35.323 ], 00:07:35.323 "driver_specific": { 00:07:35.323 "raid": { 00:07:35.323 "uuid": "d54c860e-6d7f-4c63-bb7e-08ceb3e0c58e", 00:07:35.323 "strip_size_kb": 64, 00:07:35.323 "state": "online", 00:07:35.323 "raid_level": "concat", 00:07:35.323 "superblock": true, 00:07:35.323 "num_base_bdevs": 2, 00:07:35.323 "num_base_bdevs_discovered": 2, 00:07:35.323 "num_base_bdevs_operational": 2, 00:07:35.323 "base_bdevs_list": [ 00:07:35.323 { 00:07:35.323 "name": "BaseBdev1", 00:07:35.323 "uuid": "96db0d67-06a5-4991-a435-1dcd1af45f01", 00:07:35.323 "is_configured": true, 00:07:35.323 "data_offset": 2048, 00:07:35.323 "data_size": 63488 00:07:35.323 }, 00:07:35.323 { 00:07:35.323 "name": "BaseBdev2", 00:07:35.323 "uuid": "4c90f535-5bfe-40ad-be1f-27813f93e843", 00:07:35.323 "is_configured": true, 00:07:35.323 "data_offset": 2048, 00:07:35.323 "data_size": 63488 00:07:35.323 } 00:07:35.323 ] 00:07:35.323 } 00:07:35.323 } 00:07:35.323 }' 00:07:35.323 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.323 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:35.323 BaseBdev2' 00:07:35.323 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:35.323 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:35.323 06:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:35.583 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:35.583 "name": "BaseBdev1", 00:07:35.583 "aliases": [ 00:07:35.583 "96db0d67-06a5-4991-a435-1dcd1af45f01" 00:07:35.583 ], 00:07:35.583 "product_name": "Malloc disk", 00:07:35.583 "block_size": 512, 00:07:35.583 "num_blocks": 65536, 00:07:35.583 "uuid": "96db0d67-06a5-4991-a435-1dcd1af45f01", 00:07:35.583 "assigned_rate_limits": { 00:07:35.583 "rw_ios_per_sec": 0, 00:07:35.583 "rw_mbytes_per_sec": 0, 00:07:35.583 "r_mbytes_per_sec": 0, 00:07:35.583 "w_mbytes_per_sec": 0 00:07:35.583 }, 00:07:35.583 "claimed": true, 00:07:35.583 "claim_type": "exclusive_write", 00:07:35.583 "zoned": false, 00:07:35.583 "supported_io_types": { 00:07:35.583 "read": true, 00:07:35.583 "write": true, 00:07:35.583 "unmap": true, 00:07:35.583 "flush": true, 00:07:35.583 "reset": true, 00:07:35.583 "nvme_admin": false, 00:07:35.583 "nvme_io": false, 00:07:35.583 "nvme_io_md": false, 00:07:35.583 "write_zeroes": true, 00:07:35.583 "zcopy": true, 00:07:35.583 "get_zone_info": false, 00:07:35.583 "zone_management": false, 00:07:35.583 "zone_append": false, 00:07:35.583 "compare": false, 00:07:35.583 "compare_and_write": false, 00:07:35.583 "abort": true, 00:07:35.583 "seek_hole": false, 00:07:35.583 "seek_data": 
false, 00:07:35.583 "copy": true, 00:07:35.583 "nvme_iov_md": false 00:07:35.583 }, 00:07:35.583 "memory_domains": [ 00:07:35.583 { 00:07:35.583 "dma_device_id": "system", 00:07:35.583 "dma_device_type": 1 00:07:35.583 }, 00:07:35.583 { 00:07:35.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.583 "dma_device_type": 2 00:07:35.583 } 00:07:35.583 ], 00:07:35.583 "driver_specific": {} 00:07:35.583 }' 00:07:35.583 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:35.583 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:35.583 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:35.583 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:35.583 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:35.842 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:36.101 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:36.101 "name": "BaseBdev2", 00:07:36.101 "aliases": [ 00:07:36.101 "4c90f535-5bfe-40ad-be1f-27813f93e843" 00:07:36.101 ], 00:07:36.101 "product_name": "Malloc disk", 00:07:36.101 "block_size": 512, 00:07:36.101 "num_blocks": 65536, 00:07:36.101 "uuid": "4c90f535-5bfe-40ad-be1f-27813f93e843", 00:07:36.101 "assigned_rate_limits": { 00:07:36.101 "rw_ios_per_sec": 0, 00:07:36.101 "rw_mbytes_per_sec": 0, 00:07:36.101 "r_mbytes_per_sec": 0, 00:07:36.101 "w_mbytes_per_sec": 0 00:07:36.101 }, 00:07:36.101 "claimed": true, 00:07:36.101 "claim_type": "exclusive_write", 00:07:36.101 "zoned": false, 00:07:36.101 "supported_io_types": { 00:07:36.101 "read": true, 00:07:36.101 "write": true, 00:07:36.101 "unmap": true, 00:07:36.101 "flush": true, 00:07:36.101 "reset": true, 00:07:36.101 "nvme_admin": false, 00:07:36.101 "nvme_io": false, 00:07:36.101 "nvme_io_md": false, 00:07:36.101 "write_zeroes": true, 00:07:36.101 "zcopy": true, 00:07:36.101 "get_zone_info": false, 00:07:36.101 "zone_management": false, 00:07:36.101 "zone_append": false, 00:07:36.101 "compare": false, 00:07:36.101 "compare_and_write": false, 00:07:36.101 "abort": true, 00:07:36.101 "seek_hole": false, 00:07:36.101 "seek_data": false, 00:07:36.101 "copy": true, 00:07:36.101 "nvme_iov_md": false 00:07:36.101 }, 00:07:36.101 "memory_domains": [ 00:07:36.101 { 00:07:36.101 
"dma_device_id": "system", 00:07:36.101 "dma_device_type": 1 00:07:36.101 }, 00:07:36.101 { 00:07:36.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.101 "dma_device_type": 2 00:07:36.101 } 00:07:36.101 ], 00:07:36.101 "driver_specific": {} 00:07:36.101 }' 00:07:36.101 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:36.101 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:36.360 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:36.360 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:36.360 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:36.360 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:36.360 06:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.360 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.360 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:36.360 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.360 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:36.619 [2024-08-13 06:02:38.358364] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:36.619 [2024-08-13 06:02:38.358402] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:36.619 [2024-08-13 06:02:38.358469] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.619 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.878 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:36.878 "name": "Existed_Raid", 00:07:36.878 "uuid": "d54c860e-6d7f-4c63-bb7e-08ceb3e0c58e", 00:07:36.878 "strip_size_kb": 64, 00:07:36.878 "state": "offline", 00:07:36.878 "raid_level": "concat", 00:07:36.878 "superblock": true, 00:07:36.878 "num_base_bdevs": 2, 00:07:36.878 "num_base_bdevs_discovered": 1, 00:07:36.878 "num_base_bdevs_operational": 1, 00:07:36.878 "base_bdevs_list": [ 00:07:36.878 { 00:07:36.878 "name": null, 00:07:36.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.878 "is_configured": false, 00:07:36.878 "data_offset": 2048, 00:07:36.878 "data_size": 63488 00:07:36.878 }, 00:07:36.878 { 00:07:36.878 "name": "BaseBdev2", 00:07:36.878 "uuid": "4c90f535-5bfe-40ad-be1f-27813f93e843", 00:07:36.878 "is_configured": true, 00:07:36.878 "data_offset": 2048, 00:07:36.878 "data_size": 63488 00:07:36.878 } 00:07:36.878 ] 00:07:36.878 }' 00:07:36.878 06:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:36.878 06:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.444 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:37.444 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:37.444 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.444 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:37.702 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:37.702 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:37.702 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:37.961 [2024-08-13 06:02:39.648185] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:37.961 [2024-08-13 06:02:39.648332] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:37.961 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:37.961 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:37.961 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.961 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 72668 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 72668 ']' 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 72668 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72668 00:07:38.221 killing process with pid 72668 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72668' 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 72668 00:07:38.221 [2024-08-13 06:02:39.953431] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.221 06:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 72668 00:07:38.221 [2024-08-13 06:02:39.954493] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.481 06:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:38.481 00:07:38.481 real 0m9.789s 00:07:38.481 user 0m17.592s 00:07:38.481 sys 0m1.480s 00:07:38.481 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.481 ************************************ 00:07:38.481 END TEST raid_state_function_test_sb 00:07:38.481 ************************************ 00:07:38.481 06:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.481 06:02:40 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:38.481 06:02:40 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:38.481 06:02:40 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.481 06:02:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.740 ************************************ 00:07:38.740 START TEST raid_superblock_test 00:07:38.740 ************************************ 00:07:38.740 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:07:38.740 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:07:38.740 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:07:38.740 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:07:38.740 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:07:38.740 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:07:38.740 06:02:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=73013 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 73013 /var/tmp/spdk-raid.sock 00:07:38.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 73013 ']' 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:38.741 06:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.741 [2024-08-13 06:02:40.359596] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
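The raid_superblock_test starting here builds its array on passthru bdevs rather than raw malloc bdevs: each member is a malloc bdev wrapped by bdev_passthru, and the concat array is then created on top with the superblock flag. A minimal sketch mirroring the RPCs visible in the trace that follows (same assumption of a bdev_svc app already listening on /var/tmp/spdk-raid.sock, paths relative to an SPDK checkout) would be:

    # back each RAID member with a 32 MiB malloc bdev wrapped in a passthru bdev with a fixed UUID
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # create the superblock-enabled concat array (64 KiB strip size) on the passthru bdevs
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s

Wrapping the members in passthru bdevs gives each one a stable, caller-chosen UUID, which is what lets the superblock written by bdev_raid_create identify the base bdevs across the reassembly steps exercised later in this test.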
00:07:38.741 [2024-08-13 06:02:40.359843] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73013 ] 00:07:38.741 [2024-08-13 06:02:40.507185] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.000 [2024-08-13 06:02:40.555655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.000 [2024-08-13 06:02:40.600754] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.000 [2024-08-13 06:02:40.600870] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:39.567 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:39.871 malloc1 00:07:39.871 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:40.139 [2024-08-13 06:02:41.646685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:40.139 [2024-08-13 06:02:41.646851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.139 [2024-08-13 06:02:41.646896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:40.139 [2024-08-13 06:02:41.646949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.139 [2024-08-13 06:02:41.649430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.139 [2024-08-13 06:02:41.649522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:40.139 pt1 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:40.139 malloc2 00:07:40.139 06:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.399 [2024-08-13 06:02:42.107164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.399 [2024-08-13 06:02:42.107241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.399 [2024-08-13 06:02:42.107264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:40.399 [2024-08-13 06:02:42.107273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.399 [2024-08-13 06:02:42.109660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.399 [2024-08-13 06:02:42.109701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.399 pt2 00:07:40.399 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:40.399 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:40.399 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:07:40.658 [2024-08-13 06:02:42.334839] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:40.658 [2024-08-13 06:02:42.337024] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.658 [2024-08-13 06:02:42.337223] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:40.658 [2024-08-13 06:02:42.337238] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.658 [2024-08-13 06:02:42.337582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:40.658 [2024-08-13 06:02:42.337741] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:40.658 [2024-08-13 06:02:42.337756] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:40.658 [2024-08-13 06:02:42.337929] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:40.658 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.918 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:40.918 "name": "raid_bdev1", 00:07:40.918 "uuid": "9971b167-4121-4c3e-bcfd-861f25e90f15", 00:07:40.918 "strip_size_kb": 64, 00:07:40.918 "state": "online", 00:07:40.918 "raid_level": "concat", 00:07:40.918 "superblock": true, 00:07:40.918 "num_base_bdevs": 2, 00:07:40.918 "num_base_bdevs_discovered": 2, 00:07:40.918 "num_base_bdevs_operational": 2, 00:07:40.918 "base_bdevs_list": [ 00:07:40.918 { 00:07:40.918 "name": "pt1", 00:07:40.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.918 "is_configured": true, 00:07:40.918 "data_offset": 2048, 00:07:40.918 "data_size": 63488 00:07:40.918 }, 00:07:40.918 { 00:07:40.918 "name": "pt2", 00:07:40.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.918 "is_configured": true, 00:07:40.918 "data_offset": 2048, 00:07:40.918 "data_size": 63488 00:07:40.918 } 00:07:40.918 ] 00:07:40.918 }' 00:07:40.918 06:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:40.918 06:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:41.487 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:41.746 [2024-08-13 06:02:43.317561] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.746 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:41.746 "name": "raid_bdev1", 00:07:41.746 "aliases": [ 00:07:41.746 "9971b167-4121-4c3e-bcfd-861f25e90f15" 00:07:41.746 ], 00:07:41.746 "product_name": "Raid Volume", 00:07:41.746 "block_size": 512, 00:07:41.746 "num_blocks": 126976, 00:07:41.746 "uuid": "9971b167-4121-4c3e-bcfd-861f25e90f15", 00:07:41.746 "assigned_rate_limits": { 00:07:41.746 
"rw_ios_per_sec": 0, 00:07:41.746 "rw_mbytes_per_sec": 0, 00:07:41.746 "r_mbytes_per_sec": 0, 00:07:41.746 "w_mbytes_per_sec": 0 00:07:41.746 }, 00:07:41.746 "claimed": false, 00:07:41.746 "zoned": false, 00:07:41.746 "supported_io_types": { 00:07:41.746 "read": true, 00:07:41.746 "write": true, 00:07:41.746 "unmap": true, 00:07:41.746 "flush": true, 00:07:41.746 "reset": true, 00:07:41.746 "nvme_admin": false, 00:07:41.746 "nvme_io": false, 00:07:41.746 "nvme_io_md": false, 00:07:41.746 "write_zeroes": true, 00:07:41.746 "zcopy": false, 00:07:41.746 "get_zone_info": false, 00:07:41.746 "zone_management": false, 00:07:41.746 "zone_append": false, 00:07:41.746 "compare": false, 00:07:41.746 "compare_and_write": false, 00:07:41.746 "abort": false, 00:07:41.746 "seek_hole": false, 00:07:41.746 "seek_data": false, 00:07:41.746 "copy": false, 00:07:41.746 "nvme_iov_md": false 00:07:41.746 }, 00:07:41.746 "memory_domains": [ 00:07:41.746 { 00:07:41.746 "dma_device_id": "system", 00:07:41.746 "dma_device_type": 1 00:07:41.746 }, 00:07:41.746 { 00:07:41.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.746 "dma_device_type": 2 00:07:41.746 }, 00:07:41.746 { 00:07:41.746 "dma_device_id": "system", 00:07:41.746 "dma_device_type": 1 00:07:41.746 }, 00:07:41.746 { 00:07:41.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.746 "dma_device_type": 2 00:07:41.747 } 00:07:41.747 ], 00:07:41.747 "driver_specific": { 00:07:41.747 "raid": { 00:07:41.747 "uuid": "9971b167-4121-4c3e-bcfd-861f25e90f15", 00:07:41.747 "strip_size_kb": 64, 00:07:41.747 "state": "online", 00:07:41.747 "raid_level": "concat", 00:07:41.747 "superblock": true, 00:07:41.747 "num_base_bdevs": 2, 00:07:41.747 "num_base_bdevs_discovered": 2, 00:07:41.747 "num_base_bdevs_operational": 2, 00:07:41.747 "base_bdevs_list": [ 00:07:41.747 { 00:07:41.747 "name": "pt1", 00:07:41.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.747 "is_configured": true, 00:07:41.747 "data_offset": 2048, 00:07:41.747 "data_size": 63488 00:07:41.747 }, 00:07:41.747 { 00:07:41.747 "name": "pt2", 00:07:41.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.747 "is_configured": true, 00:07:41.747 "data_offset": 2048, 00:07:41.747 "data_size": 63488 00:07:41.747 } 00:07:41.747 ] 00:07:41.747 } 00:07:41.747 } 00:07:41.747 }' 00:07:41.747 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.747 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:41.747 pt2' 00:07:41.747 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:41.747 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:41.747 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:42.008 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:42.008 "name": "pt1", 00:07:42.008 "aliases": [ 00:07:42.008 "00000000-0000-0000-0000-000000000001" 00:07:42.008 ], 00:07:42.008 "product_name": "passthru", 00:07:42.008 "block_size": 512, 00:07:42.008 "num_blocks": 65536, 00:07:42.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.008 "assigned_rate_limits": { 00:07:42.008 "rw_ios_per_sec": 0, 00:07:42.008 "rw_mbytes_per_sec": 0, 00:07:42.008 "r_mbytes_per_sec": 0, 00:07:42.008 
"w_mbytes_per_sec": 0 00:07:42.008 }, 00:07:42.008 "claimed": true, 00:07:42.008 "claim_type": "exclusive_write", 00:07:42.008 "zoned": false, 00:07:42.008 "supported_io_types": { 00:07:42.008 "read": true, 00:07:42.008 "write": true, 00:07:42.008 "unmap": true, 00:07:42.008 "flush": true, 00:07:42.008 "reset": true, 00:07:42.008 "nvme_admin": false, 00:07:42.008 "nvme_io": false, 00:07:42.008 "nvme_io_md": false, 00:07:42.008 "write_zeroes": true, 00:07:42.008 "zcopy": true, 00:07:42.008 "get_zone_info": false, 00:07:42.008 "zone_management": false, 00:07:42.008 "zone_append": false, 00:07:42.008 "compare": false, 00:07:42.008 "compare_and_write": false, 00:07:42.008 "abort": true, 00:07:42.008 "seek_hole": false, 00:07:42.008 "seek_data": false, 00:07:42.008 "copy": true, 00:07:42.008 "nvme_iov_md": false 00:07:42.008 }, 00:07:42.008 "memory_domains": [ 00:07:42.008 { 00:07:42.008 "dma_device_id": "system", 00:07:42.008 "dma_device_type": 1 00:07:42.008 }, 00:07:42.008 { 00:07:42.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.008 "dma_device_type": 2 00:07:42.008 } 00:07:42.008 ], 00:07:42.008 "driver_specific": { 00:07:42.008 "passthru": { 00:07:42.008 "name": "pt1", 00:07:42.009 "base_bdev_name": "malloc1" 00:07:42.009 } 00:07:42.009 } 00:07:42.009 }' 00:07:42.009 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:42.009 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:42.009 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:42.009 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:42.009 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:42.009 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:42.009 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:42.268 06:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:42.528 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:42.528 "name": "pt2", 00:07:42.528 "aliases": [ 00:07:42.528 "00000000-0000-0000-0000-000000000002" 00:07:42.528 ], 00:07:42.528 "product_name": "passthru", 00:07:42.528 "block_size": 512, 00:07:42.528 "num_blocks": 65536, 00:07:42.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.528 "assigned_rate_limits": { 00:07:42.528 "rw_ios_per_sec": 0, 00:07:42.528 "rw_mbytes_per_sec": 0, 00:07:42.528 "r_mbytes_per_sec": 0, 00:07:42.528 "w_mbytes_per_sec": 0 00:07:42.528 }, 00:07:42.528 "claimed": true, 00:07:42.528 "claim_type": "exclusive_write", 00:07:42.528 "zoned": false, 
00:07:42.528 "supported_io_types": { 00:07:42.528 "read": true, 00:07:42.528 "write": true, 00:07:42.528 "unmap": true, 00:07:42.528 "flush": true, 00:07:42.528 "reset": true, 00:07:42.528 "nvme_admin": false, 00:07:42.528 "nvme_io": false, 00:07:42.528 "nvme_io_md": false, 00:07:42.528 "write_zeroes": true, 00:07:42.528 "zcopy": true, 00:07:42.528 "get_zone_info": false, 00:07:42.528 "zone_management": false, 00:07:42.528 "zone_append": false, 00:07:42.528 "compare": false, 00:07:42.528 "compare_and_write": false, 00:07:42.528 "abort": true, 00:07:42.528 "seek_hole": false, 00:07:42.528 "seek_data": false, 00:07:42.528 "copy": true, 00:07:42.528 "nvme_iov_md": false 00:07:42.528 }, 00:07:42.528 "memory_domains": [ 00:07:42.528 { 00:07:42.528 "dma_device_id": "system", 00:07:42.528 "dma_device_type": 1 00:07:42.528 }, 00:07:42.528 { 00:07:42.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.528 "dma_device_type": 2 00:07:42.528 } 00:07:42.528 ], 00:07:42.528 "driver_specific": { 00:07:42.528 "passthru": { 00:07:42.528 "name": "pt2", 00:07:42.528 "base_bdev_name": "malloc2" 00:07:42.528 } 00:07:42.528 } 00:07:42.528 }' 00:07:42.528 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:42.528 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:42.528 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:42.528 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:42.528 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:07:42.788 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:43.048 [2024-08-13 06:02:44.699138] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.048 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=9971b167-4121-4c3e-bcfd-861f25e90f15 00:07:43.048 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 9971b167-4121-4c3e-bcfd-861f25e90f15 ']' 00:07:43.048 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:43.307 [2024-08-13 06:02:44.910541] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.307 [2024-08-13 06:02:44.910580] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.307 [2024-08-13 06:02:44.910697] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:07:43.307 [2024-08-13 06:02:44.910772] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.307 [2024-08-13 06:02:44.910795] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:43.307 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:43.307 06:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:07:43.566 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:07:43.567 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:07:43.567 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.567 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:43.567 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.567 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:43.825 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:43.825 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:44.083 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.084 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:44.084 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:44.341 [2024-08-13 06:02:45.888872] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:44.341 [2024-08-13 06:02:45.890935] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:44.342 [2024-08-13 06:02:45.891008] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:44.342 [2024-08-13 06:02:45.891077] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:44.342 [2024-08-13 06:02:45.891093] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.342 [2024-08-13 06:02:45.891104] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:44.342 request: 00:07:44.342 { 00:07:44.342 "name": "raid_bdev1", 00:07:44.342 "raid_level": "concat", 00:07:44.342 "base_bdevs": [ 00:07:44.342 "malloc1", 00:07:44.342 "malloc2" 00:07:44.342 ], 00:07:44.342 "strip_size_kb": 64, 00:07:44.342 "superblock": false, 00:07:44.342 "method": "bdev_raid_create", 00:07:44.342 "req_id": 1 00:07:44.342 } 00:07:44.342 Got JSON-RPC error response 00:07:44.342 response: 00:07:44.342 { 00:07:44.342 "code": -17, 00:07:44.342 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:44.342 } 00:07:44.342 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:07:44.342 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:44.342 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:44.342 06:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:44.342 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.342 06:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:07:44.342 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:07:44.342 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:07:44.342 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.599 [2024-08-13 06:02:46.308124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.599 [2024-08-13 06:02:46.308197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.599 [2024-08-13 06:02:46.308217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:44.599 [2024-08-13 06:02:46.308229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.599 [2024-08-13 06:02:46.310567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.599 [2024-08-13 06:02:46.310612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.599 [2024-08-13 06:02:46.310701] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:44.599 [2024-08-13 06:02:46.310739] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.599 pt1 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.599 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.856 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:44.856 "name": "raid_bdev1", 00:07:44.856 "uuid": "9971b167-4121-4c3e-bcfd-861f25e90f15", 00:07:44.856 "strip_size_kb": 64, 00:07:44.856 "state": "configuring", 00:07:44.856 "raid_level": "concat", 00:07:44.856 "superblock": true, 00:07:44.856 "num_base_bdevs": 2, 00:07:44.856 "num_base_bdevs_discovered": 1, 00:07:44.856 "num_base_bdevs_operational": 2, 00:07:44.856 "base_bdevs_list": [ 00:07:44.856 { 00:07:44.856 "name": "pt1", 00:07:44.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.856 "is_configured": true, 00:07:44.856 "data_offset": 2048, 00:07:44.856 "data_size": 63488 00:07:44.856 }, 00:07:44.856 { 00:07:44.856 "name": null, 00:07:44.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.856 "is_configured": false, 00:07:44.856 "data_offset": 2048, 00:07:44.856 "data_size": 63488 00:07:44.856 } 00:07:44.856 ] 00:07:44.856 }' 00:07:44.856 06:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:44.856 06:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.421 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:07:45.421 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:07:45.421 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:45.421 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.679 [2024-08-13 06:02:47.326433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.679 [2024-08-13 06:02:47.326511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.679 [2024-08-13 06:02:47.326536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008480 00:07:45.679 [2024-08-13 06:02:47.326549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.679 [2024-08-13 06:02:47.326996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.679 [2024-08-13 06:02:47.327020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.679 [2024-08-13 06:02:47.327120] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:45.679 [2024-08-13 06:02:47.327147] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.679 [2024-08-13 06:02:47.327272] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:45.679 [2024-08-13 06:02:47.327287] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:45.679 [2024-08-13 06:02:47.327567] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:45.679 [2024-08-13 06:02:47.327698] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:45.679 [2024-08-13 06:02:47.327708] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:45.679 [2024-08-13 06:02:47.327820] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.679 pt2 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.679 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.937 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:45.937 "name": "raid_bdev1", 00:07:45.938 "uuid": "9971b167-4121-4c3e-bcfd-861f25e90f15", 00:07:45.938 "strip_size_kb": 64, 00:07:45.938 "state": "online", 00:07:45.938 "raid_level": "concat", 00:07:45.938 "superblock": true, 00:07:45.938 "num_base_bdevs": 2, 00:07:45.938 "num_base_bdevs_discovered": 2, 00:07:45.938 "num_base_bdevs_operational": 2, 00:07:45.938 "base_bdevs_list": [ 00:07:45.938 { 
00:07:45.938 "name": "pt1", 00:07:45.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.938 "is_configured": true, 00:07:45.938 "data_offset": 2048, 00:07:45.938 "data_size": 63488 00:07:45.938 }, 00:07:45.938 { 00:07:45.938 "name": "pt2", 00:07:45.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.938 "is_configured": true, 00:07:45.938 "data_offset": 2048, 00:07:45.938 "data_size": 63488 00:07:45.938 } 00:07:45.938 ] 00:07:45.938 }' 00:07:45.938 06:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:45.938 06:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:46.503 [2024-08-13 06:02:48.265173] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:46.503 "name": "raid_bdev1", 00:07:46.503 "aliases": [ 00:07:46.503 "9971b167-4121-4c3e-bcfd-861f25e90f15" 00:07:46.503 ], 00:07:46.503 "product_name": "Raid Volume", 00:07:46.503 "block_size": 512, 00:07:46.503 "num_blocks": 126976, 00:07:46.503 "uuid": "9971b167-4121-4c3e-bcfd-861f25e90f15", 00:07:46.503 "assigned_rate_limits": { 00:07:46.503 "rw_ios_per_sec": 0, 00:07:46.503 "rw_mbytes_per_sec": 0, 00:07:46.503 "r_mbytes_per_sec": 0, 00:07:46.503 "w_mbytes_per_sec": 0 00:07:46.503 }, 00:07:46.503 "claimed": false, 00:07:46.503 "zoned": false, 00:07:46.503 "supported_io_types": { 00:07:46.503 "read": true, 00:07:46.503 "write": true, 00:07:46.503 "unmap": true, 00:07:46.503 "flush": true, 00:07:46.503 "reset": true, 00:07:46.503 "nvme_admin": false, 00:07:46.503 "nvme_io": false, 00:07:46.503 "nvme_io_md": false, 00:07:46.503 "write_zeroes": true, 00:07:46.503 "zcopy": false, 00:07:46.503 "get_zone_info": false, 00:07:46.503 "zone_management": false, 00:07:46.503 "zone_append": false, 00:07:46.503 "compare": false, 00:07:46.503 "compare_and_write": false, 00:07:46.503 "abort": false, 00:07:46.503 "seek_hole": false, 00:07:46.503 "seek_data": false, 00:07:46.503 "copy": false, 00:07:46.503 "nvme_iov_md": false 00:07:46.503 }, 00:07:46.503 "memory_domains": [ 00:07:46.503 { 00:07:46.503 "dma_device_id": "system", 00:07:46.503 "dma_device_type": 1 00:07:46.503 }, 00:07:46.503 { 00:07:46.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.503 "dma_device_type": 2 00:07:46.503 }, 00:07:46.503 { 00:07:46.503 "dma_device_id": "system", 00:07:46.503 "dma_device_type": 1 00:07:46.503 }, 00:07:46.503 { 00:07:46.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.503 "dma_device_type": 2 00:07:46.503 } 00:07:46.503 ], 
00:07:46.503 "driver_specific": { 00:07:46.503 "raid": { 00:07:46.503 "uuid": "9971b167-4121-4c3e-bcfd-861f25e90f15", 00:07:46.503 "strip_size_kb": 64, 00:07:46.503 "state": "online", 00:07:46.503 "raid_level": "concat", 00:07:46.503 "superblock": true, 00:07:46.503 "num_base_bdevs": 2, 00:07:46.503 "num_base_bdevs_discovered": 2, 00:07:46.503 "num_base_bdevs_operational": 2, 00:07:46.503 "base_bdevs_list": [ 00:07:46.503 { 00:07:46.503 "name": "pt1", 00:07:46.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.503 "is_configured": true, 00:07:46.503 "data_offset": 2048, 00:07:46.503 "data_size": 63488 00:07:46.503 }, 00:07:46.503 { 00:07:46.503 "name": "pt2", 00:07:46.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.503 "is_configured": true, 00:07:46.503 "data_offset": 2048, 00:07:46.503 "data_size": 63488 00:07:46.503 } 00:07:46.503 ] 00:07:46.503 } 00:07:46.503 } 00:07:46.503 }' 00:07:46.503 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.762 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:46.762 pt2' 00:07:46.762 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:46.763 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:46.763 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:46.763 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:46.763 "name": "pt1", 00:07:46.763 "aliases": [ 00:07:46.763 "00000000-0000-0000-0000-000000000001" 00:07:46.763 ], 00:07:46.763 "product_name": "passthru", 00:07:46.763 "block_size": 512, 00:07:46.763 "num_blocks": 65536, 00:07:46.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.763 "assigned_rate_limits": { 00:07:46.763 "rw_ios_per_sec": 0, 00:07:46.763 "rw_mbytes_per_sec": 0, 00:07:46.763 "r_mbytes_per_sec": 0, 00:07:46.763 "w_mbytes_per_sec": 0 00:07:46.763 }, 00:07:46.763 "claimed": true, 00:07:46.763 "claim_type": "exclusive_write", 00:07:46.763 "zoned": false, 00:07:46.763 "supported_io_types": { 00:07:46.763 "read": true, 00:07:46.763 "write": true, 00:07:46.763 "unmap": true, 00:07:46.763 "flush": true, 00:07:46.763 "reset": true, 00:07:46.763 "nvme_admin": false, 00:07:46.763 "nvme_io": false, 00:07:46.763 "nvme_io_md": false, 00:07:46.763 "write_zeroes": true, 00:07:46.763 "zcopy": true, 00:07:46.763 "get_zone_info": false, 00:07:46.763 "zone_management": false, 00:07:46.763 "zone_append": false, 00:07:46.763 "compare": false, 00:07:46.763 "compare_and_write": false, 00:07:46.763 "abort": true, 00:07:46.763 "seek_hole": false, 00:07:46.763 "seek_data": false, 00:07:46.763 "copy": true, 00:07:46.763 "nvme_iov_md": false 00:07:46.763 }, 00:07:46.763 "memory_domains": [ 00:07:46.763 { 00:07:46.763 "dma_device_id": "system", 00:07:46.763 "dma_device_type": 1 00:07:46.763 }, 00:07:46.763 { 00:07:46.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.763 "dma_device_type": 2 00:07:46.763 } 00:07:46.763 ], 00:07:46.763 "driver_specific": { 00:07:46.763 "passthru": { 00:07:46.763 "name": "pt1", 00:07:46.763 "base_bdev_name": "malloc1" 00:07:46.763 } 00:07:46.763 } 00:07:46.763 }' 00:07:46.763 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:47.021 06:02:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:47.021 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:47.021 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:47.021 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:47.021 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:47.021 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.021 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.279 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:47.279 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.279 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.279 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:47.279 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:47.279 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:47.279 06:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:47.538 "name": "pt2", 00:07:47.538 "aliases": [ 00:07:47.538 "00000000-0000-0000-0000-000000000002" 00:07:47.538 ], 00:07:47.538 "product_name": "passthru", 00:07:47.538 "block_size": 512, 00:07:47.538 "num_blocks": 65536, 00:07:47.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.538 "assigned_rate_limits": { 00:07:47.538 "rw_ios_per_sec": 0, 00:07:47.538 "rw_mbytes_per_sec": 0, 00:07:47.538 "r_mbytes_per_sec": 0, 00:07:47.538 "w_mbytes_per_sec": 0 00:07:47.538 }, 00:07:47.538 "claimed": true, 00:07:47.538 "claim_type": "exclusive_write", 00:07:47.538 "zoned": false, 00:07:47.538 "supported_io_types": { 00:07:47.538 "read": true, 00:07:47.538 "write": true, 00:07:47.538 "unmap": true, 00:07:47.538 "flush": true, 00:07:47.538 "reset": true, 00:07:47.538 "nvme_admin": false, 00:07:47.538 "nvme_io": false, 00:07:47.538 "nvme_io_md": false, 00:07:47.538 "write_zeroes": true, 00:07:47.538 "zcopy": true, 00:07:47.538 "get_zone_info": false, 00:07:47.538 "zone_management": false, 00:07:47.538 "zone_append": false, 00:07:47.538 "compare": false, 00:07:47.538 "compare_and_write": false, 00:07:47.538 "abort": true, 00:07:47.538 "seek_hole": false, 00:07:47.538 "seek_data": false, 00:07:47.538 "copy": true, 00:07:47.538 "nvme_iov_md": false 00:07:47.538 }, 00:07:47.538 "memory_domains": [ 00:07:47.538 { 00:07:47.538 "dma_device_id": "system", 00:07:47.538 "dma_device_type": 1 00:07:47.538 }, 00:07:47.538 { 00:07:47.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.538 "dma_device_type": 2 00:07:47.538 } 00:07:47.538 ], 00:07:47.538 "driver_specific": { 00:07:47.538 "passthru": { 00:07:47.538 "name": "pt2", 00:07:47.538 "base_bdev_name": "malloc2" 00:07:47.538 } 00:07:47.538 } 00:07:47.538 }' 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:47.538 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.795 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.795 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:47.795 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.795 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.795 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:47.795 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:47.795 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:07:48.054 [2024-08-13 06:02:49.654845] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 9971b167-4121-4c3e-bcfd-861f25e90f15 '!=' 9971b167-4121-4c3e-bcfd-861f25e90f15 ']' 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 73013 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 73013 ']' 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 73013 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73013 00:07:48.054 killing process with pid 73013 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73013' 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 73013 00:07:48.054 [2024-08-13 06:02:49.729238] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.054 [2024-08-13 06:02:49.729342] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.054 [2024-08-13 06:02:49.729399] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.054 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 73013 00:07:48.054 [2024-08-13 06:02:49.729412] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, 
state offline 00:07:48.054 [2024-08-13 06:02:49.752570] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.312 06:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:07:48.312 00:07:48.312 real 0m9.719s 00:07:48.312 user 0m17.503s 00:07:48.312 sys 0m1.504s 00:07:48.312 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.312 ************************************ 00:07:48.312 END TEST raid_superblock_test 00:07:48.312 ************************************ 00:07:48.312 06:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.312 06:02:50 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:48.312 06:02:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:48.312 06:02:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.312 06:02:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.312 ************************************ 00:07:48.312 START TEST raid_read_error_test 00:07:48.312 ************************************ 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 read 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:48.312 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p 
/raidtest 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Mb3RgQh6Nf 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=73346 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 73346 /var/tmp/spdk-raid.sock 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 73346 ']' 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.313 06:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.570 [2024-08-13 06:02:50.157681] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:07:48.570 [2024-08-13 06:02:50.157812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73346 ] 00:07:48.570 [2024-08-13 06:02:50.291361] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.570 [2024-08-13 06:02:50.340060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.829 [2024-08-13 06:02:50.383533] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.829 [2024-08-13 06:02:50.383650] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.395 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.395 06:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:07:49.395 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:49.395 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:49.653 BaseBdev1_malloc 00:07:49.653 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:49.653 true 00:07:49.653 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:49.911 [2024-08-13 06:02:51.604362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:49.911 [2024-08-13 06:02:51.604453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.911 [2024-08-13 
06:02:51.604480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:49.911 [2024-08-13 06:02:51.604494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.911 [2024-08-13 06:02:51.606793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.911 [2024-08-13 06:02:51.606838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:49.911 BaseBdev1 00:07:49.911 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:49.911 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:50.169 BaseBdev2_malloc 00:07:50.169 06:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:50.427 true 00:07:50.427 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:50.685 [2024-08-13 06:02:52.264175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:50.685 [2024-08-13 06:02:52.264260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.685 [2024-08-13 06:02:52.264286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:50.685 [2024-08-13 06:02:52.264298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.686 [2024-08-13 06:02:52.266599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.686 [2024-08-13 06:02:52.266641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:50.686 BaseBdev2 00:07:50.686 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:50.944 [2024-08-13 06:02:52.507807] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.944 [2024-08-13 06:02:52.509907] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.944 [2024-08-13 06:02:52.510118] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:50.944 [2024-08-13 06:02:52.510140] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.944 [2024-08-13 06:02:52.510451] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:50.944 [2024-08-13 06:02:52.510586] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:50.944 [2024-08-13 06:02:52.510595] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:50.944 [2024-08-13 06:02:52.510737] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:50.944 06:02:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.944 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.203 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:51.203 "name": "raid_bdev1", 00:07:51.203 "uuid": "902a5a9a-6a42-435d-95cf-b94d5c794cfc", 00:07:51.203 "strip_size_kb": 64, 00:07:51.203 "state": "online", 00:07:51.203 "raid_level": "concat", 00:07:51.203 "superblock": true, 00:07:51.203 "num_base_bdevs": 2, 00:07:51.203 "num_base_bdevs_discovered": 2, 00:07:51.203 "num_base_bdevs_operational": 2, 00:07:51.203 "base_bdevs_list": [ 00:07:51.203 { 00:07:51.203 "name": "BaseBdev1", 00:07:51.203 "uuid": "ca2c084c-0028-53ba-a4c8-277eb1b1defa", 00:07:51.203 "is_configured": true, 00:07:51.203 "data_offset": 2048, 00:07:51.203 "data_size": 63488 00:07:51.203 }, 00:07:51.203 { 00:07:51.203 "name": "BaseBdev2", 00:07:51.203 "uuid": "daaed74a-319c-52bb-9da8-94932b417f8f", 00:07:51.203 "is_configured": true, 00:07:51.203 "data_offset": 2048, 00:07:51.203 "data_size": 63488 00:07:51.203 } 00:07:51.203 ] 00:07:51.203 }' 00:07:51.203 06:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:51.203 06:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.768 06:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:51.768 06:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:51.768 [2024-08-13 06:02:53.454568] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:52.708 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:52.978 06:02:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:52.978 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.236 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:53.236 "name": "raid_bdev1", 00:07:53.236 "uuid": "902a5a9a-6a42-435d-95cf-b94d5c794cfc", 00:07:53.236 "strip_size_kb": 64, 00:07:53.236 "state": "online", 00:07:53.236 "raid_level": "concat", 00:07:53.236 "superblock": true, 00:07:53.236 "num_base_bdevs": 2, 00:07:53.236 "num_base_bdevs_discovered": 2, 00:07:53.236 "num_base_bdevs_operational": 2, 00:07:53.236 "base_bdevs_list": [ 00:07:53.236 { 00:07:53.236 "name": "BaseBdev1", 00:07:53.236 "uuid": "ca2c084c-0028-53ba-a4c8-277eb1b1defa", 00:07:53.236 "is_configured": true, 00:07:53.236 "data_offset": 2048, 00:07:53.236 "data_size": 63488 00:07:53.236 }, 00:07:53.236 { 00:07:53.236 "name": "BaseBdev2", 00:07:53.236 "uuid": "daaed74a-319c-52bb-9da8-94932b417f8f", 00:07:53.236 "is_configured": true, 00:07:53.236 "data_offset": 2048, 00:07:53.236 "data_size": 63488 00:07:53.236 } 00:07:53.236 ] 00:07:53.236 }' 00:07:53.236 06:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:53.236 06:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:53.803 [2024-08-13 06:02:55.514367] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.803 [2024-08-13 06:02:55.514484] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.803 [2024-08-13 06:02:55.517177] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.803 [2024-08-13 06:02:55.517272] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.803 [2024-08-13 06:02:55.517328] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.803 [2024-08-13 06:02:55.517415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:53.803 0 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 73346 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 73346 ']' 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@950 -- # kill -0 73346 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73346 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:53.803 killing process with pid 73346 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73346' 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 73346 00:07:53.803 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 73346 00:07:53.803 [2024-08-13 06:02:55.560828] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.803 [2024-08-13 06:02:55.576244] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Mb3RgQh6Nf 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:54.062 ************************************ 00:07:54.062 END TEST raid_read_error_test 00:07:54.062 ************************************ 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:07:54.062 00:07:54.062 real 0m5.756s 00:07:54.062 user 0m8.976s 00:07:54.062 sys 0m0.815s 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.062 06:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.322 06:02:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:54.322 06:02:55 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:54.322 06:02:55 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.322 06:02:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.322 ************************************ 00:07:54.322 START TEST raid_write_error_test 00:07:54.322 ************************************ 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 write 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:54.322 06:02:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.KlyqdGMOmx 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=73517 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 73517 /var/tmp/spdk-raid.sock 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 73517 ']' 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.322 06:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.322 [2024-08-13 06:02:55.979060] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
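For readers tracing the raid_read_error_test pass above, the whole flow reduces to a short RPC sequence against the bdevperf socket. The sketch below is reconstructed from the commands traced in this run (socket path, malloc sizes, strip size and the /raidtest log file are the values used here); the RPC variable and the loop are only shorthand for the per-bdev steps, not part of the original script.

  # Condensed sketch of the read-error flow traced above (assumes bdevperf is already listening).
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2; do
      $RPC bdev_malloc_create 32 512 -b "${b}_malloc"         # 32 MiB malloc bdev, 512-byte blocks
      $RPC bdev_error_create "${b}_malloc"                    # error-injection wrapper, exposed as EE_<name>
      $RPC bdev_passthru_create -b "EE_${b}_malloc" -p "$b"   # passthru bdev the raid is assembled from
  done
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # 64k strips, with superblock
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure    # make one leg fail reads
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
  $RPC bdev_raid_delete raid_bdev1
  # After bdevperf exits, the verdict comes from its log: concat has no redundancy,
  # so the injected errors must surface as a nonzero failure rate (0.48/s in this run).
  fail_per_s=$(grep -v Job /raidtest/tmp.Mb3RgQh6Nf | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]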
00:07:54.322 [2024-08-13 06:02:55.979175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73517 ] 00:07:54.581 [2024-08-13 06:02:56.123560] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.581 [2024-08-13 06:02:56.169420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.581 [2024-08-13 06:02:56.211079] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.581 [2024-08-13 06:02:56.211118] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.150 06:02:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:55.150 06:02:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:07:55.150 06:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:55.150 06:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:55.408 BaseBdev1_malloc 00:07:55.408 06:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:55.408 true 00:07:55.408 06:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:55.667 [2024-08-13 06:02:57.386210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:55.667 [2024-08-13 06:02:57.386295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.667 [2024-08-13 06:02:57.386325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:55.667 [2024-08-13 06:02:57.386347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.667 [2024-08-13 06:02:57.388607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.667 [2024-08-13 06:02:57.388708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:55.667 BaseBdev1 00:07:55.667 06:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:55.667 06:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:55.926 BaseBdev2_malloc 00:07:55.926 06:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:56.186 true 00:07:56.186 06:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:56.445 [2024-08-13 06:02:57.982102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:56.445 [2024-08-13 06:02:57.982253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.445 [2024-08-13 06:02:57.982296] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:56.445 [2024-08-13 06:02:57.982327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.445 [2024-08-13 06:02:57.984505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.445 [2024-08-13 06:02:57.984602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:56.445 BaseBdev2 00:07:56.445 06:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:56.445 [2024-08-13 06:02:58.181811] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.445 [2024-08-13 06:02:58.183785] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.445 [2024-08-13 06:02:58.184062] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:56.445 [2024-08-13 06:02:58.184114] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:56.445 [2024-08-13 06:02:58.184443] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:56.445 [2024-08-13 06:02:58.184639] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:56.445 [2024-08-13 06:02:58.184682] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:56.445 [2024-08-13 06:02:58.184904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:56.445 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:56.446 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:56.446 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:56.446 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.705 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:56.705 "name": "raid_bdev1", 00:07:56.705 "uuid": "de855aef-cfbc-429d-a95b-71a49bb92663", 00:07:56.705 "strip_size_kb": 64, 00:07:56.705 "state": "online", 00:07:56.705 "raid_level": "concat", 00:07:56.705 "superblock": true, 00:07:56.705 "num_base_bdevs": 2, 00:07:56.705 
"num_base_bdevs_discovered": 2, 00:07:56.705 "num_base_bdevs_operational": 2, 00:07:56.705 "base_bdevs_list": [ 00:07:56.705 { 00:07:56.705 "name": "BaseBdev1", 00:07:56.705 "uuid": "47911a0f-a23e-5d6e-8dc0-b8a829391f45", 00:07:56.705 "is_configured": true, 00:07:56.705 "data_offset": 2048, 00:07:56.705 "data_size": 63488 00:07:56.705 }, 00:07:56.705 { 00:07:56.705 "name": "BaseBdev2", 00:07:56.705 "uuid": "5ae1053f-63ce-571b-adb7-239f4cbeff33", 00:07:56.705 "is_configured": true, 00:07:56.705 "data_offset": 2048, 00:07:56.705 "data_size": 63488 00:07:56.705 } 00:07:56.705 ] 00:07:56.705 }' 00:07:56.705 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:56.705 06:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.274 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:57.274 06:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:57.274 [2024-08-13 06:02:58.960803] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:58.214 06:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.474 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.733 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:58.733 "name": "raid_bdev1", 00:07:58.733 "uuid": "de855aef-cfbc-429d-a95b-71a49bb92663", 00:07:58.733 "strip_size_kb": 64, 00:07:58.733 "state": "online", 00:07:58.733 "raid_level": "concat", 00:07:58.733 "superblock": true, 00:07:58.733 "num_base_bdevs": 2, 00:07:58.733 
"num_base_bdevs_discovered": 2, 00:07:58.733 "num_base_bdevs_operational": 2, 00:07:58.733 "base_bdevs_list": [ 00:07:58.733 { 00:07:58.733 "name": "BaseBdev1", 00:07:58.733 "uuid": "47911a0f-a23e-5d6e-8dc0-b8a829391f45", 00:07:58.733 "is_configured": true, 00:07:58.733 "data_offset": 2048, 00:07:58.733 "data_size": 63488 00:07:58.733 }, 00:07:58.733 { 00:07:58.733 "name": "BaseBdev2", 00:07:58.733 "uuid": "5ae1053f-63ce-571b-adb7-239f4cbeff33", 00:07:58.733 "is_configured": true, 00:07:58.733 "data_offset": 2048, 00:07:58.733 "data_size": 63488 00:07:58.733 } 00:07:58.733 ] 00:07:58.733 }' 00:07:58.733 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:58.733 06:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.301 06:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:59.301 [2024-08-13 06:03:01.071398] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.301 [2024-08-13 06:03:01.071516] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.301 [2024-08-13 06:03:01.073930] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.301 [2024-08-13 06:03:01.074037] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.301 [2024-08-13 06:03:01.074088] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.301 [2024-08-13 06:03:01.074132] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:59.301 0 00:07:59.301 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 73517 00:07:59.301 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 73517 ']' 00:07:59.301 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 73517 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73517 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73517' 00:07:59.562 killing process with pid 73517 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 73517 00:07:59.562 [2024-08-13 06:03:01.133625] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.562 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 73517 00:07:59.562 [2024-08-13 06:03:01.148766] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.KlyqdGMOmx 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- 
# awk '{print $6}' 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:07:59.829 00:07:59.829 real 0m5.496s 00:07:59.829 user 0m8.504s 00:07:59.829 sys 0m0.768s 00:07:59.829 ************************************ 00:07:59.829 END TEST raid_write_error_test 00:07:59.829 ************************************ 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.829 06:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.829 06:03:01 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:07:59.829 06:03:01 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:59.829 06:03:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:59.829 06:03:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.829 06:03:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.829 ************************************ 00:07:59.829 START TEST raid_state_function_test 00:07:59.829 ************************************ 00:07:59.829 06:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=73675 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 73675' 00:07:59.830 Process raid pid: 73675 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 73675 /var/tmp/spdk-raid.sock 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 73675 ']' 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:59.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.830 06:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.830 [2024-08-13 06:03:01.543245] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
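The verify_raid_bdev_state checks that punctuate this trace, and that the raid_state_function_test starting here relies on, all follow one pattern: dump every raid bdev over RPC, pick the one of interest with jq, then compare its fields against the expected values. Below is a minimal sketch of that pattern, using the Existed_Raid values visible in this run before any base bdevs exist; the per-field <<< extraction is illustrative shorthand, not the helper's exact implementation.

  # Fetch the raid bdev's info and check the fields the helper asserts on.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r '.state' <<< "$info") == "configuring" ]]              # no base bdevs configured yet
  [[ $(jq -r '.raid_level' <<< "$info") == "raid1" ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 0 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 2 ]]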
00:07:59.830 [2024-08-13 06:03:01.543365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.088 [2024-08-13 06:03:01.670560] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.088 [2024-08-13 06:03:01.720071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.088 [2024-08-13 06:03:01.764531] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.088 [2024-08-13 06:03:01.764567] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.656 06:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.656 06:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:08:00.656 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:00.915 [2024-08-13 06:03:02.580504] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.915 [2024-08-13 06:03:02.580641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.915 [2024-08-13 06:03:02.580660] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.915 [2024-08-13 06:03:02.580669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.915 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.174 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:01.174 "name": "Existed_Raid", 00:08:01.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.174 "strip_size_kb": 0, 00:08:01.174 "state": "configuring", 00:08:01.174 "raid_level": "raid1", 00:08:01.174 "superblock": false, 00:08:01.174 "num_base_bdevs": 2, 
00:08:01.174 "num_base_bdevs_discovered": 0, 00:08:01.174 "num_base_bdevs_operational": 2, 00:08:01.174 "base_bdevs_list": [ 00:08:01.174 { 00:08:01.174 "name": "BaseBdev1", 00:08:01.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.174 "is_configured": false, 00:08:01.174 "data_offset": 0, 00:08:01.174 "data_size": 0 00:08:01.174 }, 00:08:01.174 { 00:08:01.174 "name": "BaseBdev2", 00:08:01.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.174 "is_configured": false, 00:08:01.174 "data_offset": 0, 00:08:01.174 "data_size": 0 00:08:01.174 } 00:08:01.174 ] 00:08:01.174 }' 00:08:01.174 06:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:01.174 06:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.742 06:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:01.742 [2024-08-13 06:03:03.530765] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.742 [2024-08-13 06:03:03.530896] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:02.003 06:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:02.003 [2024-08-13 06:03:03.734375] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.003 [2024-08-13 06:03:03.734502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.003 [2024-08-13 06:03:03.734564] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.003 [2024-08-13 06:03:03.734586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.003 06:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:02.263 [2024-08-13 06:03:03.942741] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.263 BaseBdev1 00:08:02.263 06:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:02.263 06:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:08:02.263 06:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:02.263 06:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:02.263 06:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:02.263 06:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:02.263 06:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:02.522 06:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.781 [ 00:08:02.781 { 00:08:02.781 "name": "BaseBdev1", 00:08:02.781 "aliases": [ 00:08:02.781 
"89339296-4f32-4ad5-857d-a1ce8c20775a" 00:08:02.781 ], 00:08:02.781 "product_name": "Malloc disk", 00:08:02.781 "block_size": 512, 00:08:02.781 "num_blocks": 65536, 00:08:02.781 "uuid": "89339296-4f32-4ad5-857d-a1ce8c20775a", 00:08:02.781 "assigned_rate_limits": { 00:08:02.781 "rw_ios_per_sec": 0, 00:08:02.781 "rw_mbytes_per_sec": 0, 00:08:02.781 "r_mbytes_per_sec": 0, 00:08:02.781 "w_mbytes_per_sec": 0 00:08:02.781 }, 00:08:02.781 "claimed": true, 00:08:02.781 "claim_type": "exclusive_write", 00:08:02.781 "zoned": false, 00:08:02.781 "supported_io_types": { 00:08:02.781 "read": true, 00:08:02.781 "write": true, 00:08:02.781 "unmap": true, 00:08:02.781 "flush": true, 00:08:02.781 "reset": true, 00:08:02.781 "nvme_admin": false, 00:08:02.781 "nvme_io": false, 00:08:02.781 "nvme_io_md": false, 00:08:02.781 "write_zeroes": true, 00:08:02.781 "zcopy": true, 00:08:02.781 "get_zone_info": false, 00:08:02.781 "zone_management": false, 00:08:02.781 "zone_append": false, 00:08:02.781 "compare": false, 00:08:02.781 "compare_and_write": false, 00:08:02.781 "abort": true, 00:08:02.781 "seek_hole": false, 00:08:02.781 "seek_data": false, 00:08:02.781 "copy": true, 00:08:02.781 "nvme_iov_md": false 00:08:02.781 }, 00:08:02.781 "memory_domains": [ 00:08:02.781 { 00:08:02.781 "dma_device_id": "system", 00:08:02.781 "dma_device_type": 1 00:08:02.781 }, 00:08:02.781 { 00:08:02.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.781 "dma_device_type": 2 00:08:02.781 } 00:08:02.781 ], 00:08:02.781 "driver_specific": {} 00:08:02.781 } 00:08:02.781 ] 00:08:02.781 06:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:02.781 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.781 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:02.782 "name": "Existed_Raid", 00:08:02.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.782 "strip_size_kb": 0, 00:08:02.782 "state": "configuring", 00:08:02.782 "raid_level": "raid1", 00:08:02.782 "superblock": false, 00:08:02.782 "num_base_bdevs": 2, 00:08:02.782 "num_base_bdevs_discovered": 1, 
00:08:02.782 "num_base_bdevs_operational": 2, 00:08:02.782 "base_bdevs_list": [ 00:08:02.782 { 00:08:02.782 "name": "BaseBdev1", 00:08:02.782 "uuid": "89339296-4f32-4ad5-857d-a1ce8c20775a", 00:08:02.782 "is_configured": true, 00:08:02.782 "data_offset": 0, 00:08:02.782 "data_size": 65536 00:08:02.782 }, 00:08:02.782 { 00:08:02.782 "name": "BaseBdev2", 00:08:02.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.782 "is_configured": false, 00:08:02.782 "data_offset": 0, 00:08:02.782 "data_size": 0 00:08:02.782 } 00:08:02.782 ] 00:08:02.782 }' 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:02.782 06:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.350 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:03.608 [2024-08-13 06:03:05.284546] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.608 [2024-08-13 06:03:05.284707] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:03.608 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:03.867 [2024-08-13 06:03:05.484255] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.867 [2024-08-13 06:03:05.486190] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.867 [2024-08-13 06:03:05.486269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.867 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.126 06:03:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:04.126 "name": "Existed_Raid", 00:08:04.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.126 "strip_size_kb": 0, 00:08:04.126 "state": "configuring", 00:08:04.126 "raid_level": "raid1", 00:08:04.126 "superblock": false, 00:08:04.126 "num_base_bdevs": 2, 00:08:04.126 "num_base_bdevs_discovered": 1, 00:08:04.126 "num_base_bdevs_operational": 2, 00:08:04.126 "base_bdevs_list": [ 00:08:04.126 { 00:08:04.126 "name": "BaseBdev1", 00:08:04.126 "uuid": "89339296-4f32-4ad5-857d-a1ce8c20775a", 00:08:04.126 "is_configured": true, 00:08:04.126 "data_offset": 0, 00:08:04.126 "data_size": 65536 00:08:04.126 }, 00:08:04.126 { 00:08:04.126 "name": "BaseBdev2", 00:08:04.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.126 "is_configured": false, 00:08:04.126 "data_offset": 0, 00:08:04.126 "data_size": 0 00:08:04.126 } 00:08:04.126 ] 00:08:04.126 }' 00:08:04.126 06:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:04.126 06:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.694 [2024-08-13 06:03:06.389505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.694 [2024-08-13 06:03:06.389644] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:04.694 [2024-08-13 06:03:06.389687] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:04.694 [2024-08-13 06:03:06.390046] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:04.694 [2024-08-13 06:03:06.390248] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:04.694 [2024-08-13 06:03:06.390288] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:04.694 [2024-08-13 06:03:06.390516] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.694 BaseBdev2 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:04.694 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:04.953 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.213 [ 00:08:05.213 { 00:08:05.213 "name": "BaseBdev2", 00:08:05.213 "aliases": [ 00:08:05.213 "3ba3fcf4-c956-4a68-99b9-aac0754f106d" 00:08:05.213 ], 00:08:05.213 "product_name": "Malloc disk", 
00:08:05.213 "block_size": 512, 00:08:05.213 "num_blocks": 65536, 00:08:05.213 "uuid": "3ba3fcf4-c956-4a68-99b9-aac0754f106d", 00:08:05.213 "assigned_rate_limits": { 00:08:05.213 "rw_ios_per_sec": 0, 00:08:05.213 "rw_mbytes_per_sec": 0, 00:08:05.213 "r_mbytes_per_sec": 0, 00:08:05.213 "w_mbytes_per_sec": 0 00:08:05.213 }, 00:08:05.213 "claimed": true, 00:08:05.213 "claim_type": "exclusive_write", 00:08:05.213 "zoned": false, 00:08:05.213 "supported_io_types": { 00:08:05.213 "read": true, 00:08:05.213 "write": true, 00:08:05.213 "unmap": true, 00:08:05.213 "flush": true, 00:08:05.213 "reset": true, 00:08:05.213 "nvme_admin": false, 00:08:05.213 "nvme_io": false, 00:08:05.213 "nvme_io_md": false, 00:08:05.213 "write_zeroes": true, 00:08:05.213 "zcopy": true, 00:08:05.213 "get_zone_info": false, 00:08:05.213 "zone_management": false, 00:08:05.213 "zone_append": false, 00:08:05.213 "compare": false, 00:08:05.213 "compare_and_write": false, 00:08:05.213 "abort": true, 00:08:05.213 "seek_hole": false, 00:08:05.213 "seek_data": false, 00:08:05.213 "copy": true, 00:08:05.213 "nvme_iov_md": false 00:08:05.213 }, 00:08:05.213 "memory_domains": [ 00:08:05.213 { 00:08:05.213 "dma_device_id": "system", 00:08:05.213 "dma_device_type": 1 00:08:05.213 }, 00:08:05.213 { 00:08:05.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.213 "dma_device_type": 2 00:08:05.213 } 00:08:05.213 ], 00:08:05.213 "driver_specific": {} 00:08:05.213 } 00:08:05.213 ] 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.213 06:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.471 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.471 "name": "Existed_Raid", 00:08:05.471 "uuid": "a4e29459-1195-48a5-91a2-b0aefeb26957", 00:08:05.471 "strip_size_kb": 0, 00:08:05.471 "state": "online", 00:08:05.471 "raid_level": "raid1", 00:08:05.471 
"superblock": false, 00:08:05.471 "num_base_bdevs": 2, 00:08:05.471 "num_base_bdevs_discovered": 2, 00:08:05.471 "num_base_bdevs_operational": 2, 00:08:05.471 "base_bdevs_list": [ 00:08:05.471 { 00:08:05.471 "name": "BaseBdev1", 00:08:05.471 "uuid": "89339296-4f32-4ad5-857d-a1ce8c20775a", 00:08:05.471 "is_configured": true, 00:08:05.471 "data_offset": 0, 00:08:05.471 "data_size": 65536 00:08:05.471 }, 00:08:05.471 { 00:08:05.471 "name": "BaseBdev2", 00:08:05.471 "uuid": "3ba3fcf4-c956-4a68-99b9-aac0754f106d", 00:08:05.471 "is_configured": true, 00:08:05.471 "data_offset": 0, 00:08:05.471 "data_size": 65536 00:08:05.471 } 00:08:05.471 ] 00:08:05.471 }' 00:08:05.471 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.471 06:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:05.729 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:05.988 [2024-08-13 06:03:07.679758] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.988 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:05.988 "name": "Existed_Raid", 00:08:05.988 "aliases": [ 00:08:05.988 "a4e29459-1195-48a5-91a2-b0aefeb26957" 00:08:05.988 ], 00:08:05.988 "product_name": "Raid Volume", 00:08:05.988 "block_size": 512, 00:08:05.988 "num_blocks": 65536, 00:08:05.988 "uuid": "a4e29459-1195-48a5-91a2-b0aefeb26957", 00:08:05.988 "assigned_rate_limits": { 00:08:05.988 "rw_ios_per_sec": 0, 00:08:05.988 "rw_mbytes_per_sec": 0, 00:08:05.988 "r_mbytes_per_sec": 0, 00:08:05.988 "w_mbytes_per_sec": 0 00:08:05.988 }, 00:08:05.988 "claimed": false, 00:08:05.988 "zoned": false, 00:08:05.988 "supported_io_types": { 00:08:05.988 "read": true, 00:08:05.988 "write": true, 00:08:05.988 "unmap": false, 00:08:05.988 "flush": false, 00:08:05.988 "reset": true, 00:08:05.988 "nvme_admin": false, 00:08:05.988 "nvme_io": false, 00:08:05.988 "nvme_io_md": false, 00:08:05.988 "write_zeroes": true, 00:08:05.988 "zcopy": false, 00:08:05.988 "get_zone_info": false, 00:08:05.988 "zone_management": false, 00:08:05.988 "zone_append": false, 00:08:05.988 "compare": false, 00:08:05.988 "compare_and_write": false, 00:08:05.988 "abort": false, 00:08:05.988 "seek_hole": false, 00:08:05.988 "seek_data": false, 00:08:05.988 "copy": false, 00:08:05.988 "nvme_iov_md": false 00:08:05.988 }, 00:08:05.988 "memory_domains": [ 00:08:05.988 { 00:08:05.988 "dma_device_id": "system", 00:08:05.988 "dma_device_type": 1 00:08:05.988 }, 00:08:05.988 { 00:08:05.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.988 "dma_device_type": 2 
00:08:05.988 }, 00:08:05.988 { 00:08:05.988 "dma_device_id": "system", 00:08:05.988 "dma_device_type": 1 00:08:05.988 }, 00:08:05.988 { 00:08:05.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.988 "dma_device_type": 2 00:08:05.988 } 00:08:05.988 ], 00:08:05.988 "driver_specific": { 00:08:05.988 "raid": { 00:08:05.988 "uuid": "a4e29459-1195-48a5-91a2-b0aefeb26957", 00:08:05.988 "strip_size_kb": 0, 00:08:05.988 "state": "online", 00:08:05.988 "raid_level": "raid1", 00:08:05.988 "superblock": false, 00:08:05.988 "num_base_bdevs": 2, 00:08:05.988 "num_base_bdevs_discovered": 2, 00:08:05.988 "num_base_bdevs_operational": 2, 00:08:05.988 "base_bdevs_list": [ 00:08:05.988 { 00:08:05.988 "name": "BaseBdev1", 00:08:05.988 "uuid": "89339296-4f32-4ad5-857d-a1ce8c20775a", 00:08:05.988 "is_configured": true, 00:08:05.988 "data_offset": 0, 00:08:05.988 "data_size": 65536 00:08:05.988 }, 00:08:05.988 { 00:08:05.988 "name": "BaseBdev2", 00:08:05.988 "uuid": "3ba3fcf4-c956-4a68-99b9-aac0754f106d", 00:08:05.988 "is_configured": true, 00:08:05.988 "data_offset": 0, 00:08:05.988 "data_size": 65536 00:08:05.988 } 00:08:05.988 ] 00:08:05.988 } 00:08:05.988 } 00:08:05.988 }' 00:08:05.988 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.988 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:05.988 BaseBdev2' 00:08:05.988 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:05.988 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:05.988 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:06.247 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:06.247 "name": "BaseBdev1", 00:08:06.247 "aliases": [ 00:08:06.247 "89339296-4f32-4ad5-857d-a1ce8c20775a" 00:08:06.247 ], 00:08:06.247 "product_name": "Malloc disk", 00:08:06.247 "block_size": 512, 00:08:06.247 "num_blocks": 65536, 00:08:06.247 "uuid": "89339296-4f32-4ad5-857d-a1ce8c20775a", 00:08:06.247 "assigned_rate_limits": { 00:08:06.247 "rw_ios_per_sec": 0, 00:08:06.247 "rw_mbytes_per_sec": 0, 00:08:06.247 "r_mbytes_per_sec": 0, 00:08:06.247 "w_mbytes_per_sec": 0 00:08:06.247 }, 00:08:06.247 "claimed": true, 00:08:06.247 "claim_type": "exclusive_write", 00:08:06.247 "zoned": false, 00:08:06.247 "supported_io_types": { 00:08:06.247 "read": true, 00:08:06.247 "write": true, 00:08:06.247 "unmap": true, 00:08:06.247 "flush": true, 00:08:06.247 "reset": true, 00:08:06.247 "nvme_admin": false, 00:08:06.247 "nvme_io": false, 00:08:06.247 "nvme_io_md": false, 00:08:06.247 "write_zeroes": true, 00:08:06.247 "zcopy": true, 00:08:06.247 "get_zone_info": false, 00:08:06.247 "zone_management": false, 00:08:06.247 "zone_append": false, 00:08:06.247 "compare": false, 00:08:06.247 "compare_and_write": false, 00:08:06.247 "abort": true, 00:08:06.247 "seek_hole": false, 00:08:06.247 "seek_data": false, 00:08:06.247 "copy": true, 00:08:06.247 "nvme_iov_md": false 00:08:06.247 }, 00:08:06.247 "memory_domains": [ 00:08:06.247 { 00:08:06.247 "dma_device_id": "system", 00:08:06.247 "dma_device_type": 1 00:08:06.247 }, 00:08:06.247 { 00:08:06.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.247 "dma_device_type": 2 00:08:06.247 } 00:08:06.247 
], 00:08:06.247 "driver_specific": {} 00:08:06.247 }' 00:08:06.247 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.247 06:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.247 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:06.247 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:06.506 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:06.765 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:06.765 "name": "BaseBdev2", 00:08:06.765 "aliases": [ 00:08:06.765 "3ba3fcf4-c956-4a68-99b9-aac0754f106d" 00:08:06.765 ], 00:08:06.765 "product_name": "Malloc disk", 00:08:06.765 "block_size": 512, 00:08:06.765 "num_blocks": 65536, 00:08:06.765 "uuid": "3ba3fcf4-c956-4a68-99b9-aac0754f106d", 00:08:06.765 "assigned_rate_limits": { 00:08:06.765 "rw_ios_per_sec": 0, 00:08:06.765 "rw_mbytes_per_sec": 0, 00:08:06.765 "r_mbytes_per_sec": 0, 00:08:06.766 "w_mbytes_per_sec": 0 00:08:06.766 }, 00:08:06.766 "claimed": true, 00:08:06.766 "claim_type": "exclusive_write", 00:08:06.766 "zoned": false, 00:08:06.766 "supported_io_types": { 00:08:06.766 "read": true, 00:08:06.766 "write": true, 00:08:06.766 "unmap": true, 00:08:06.766 "flush": true, 00:08:06.766 "reset": true, 00:08:06.766 "nvme_admin": false, 00:08:06.766 "nvme_io": false, 00:08:06.766 "nvme_io_md": false, 00:08:06.766 "write_zeroes": true, 00:08:06.766 "zcopy": true, 00:08:06.766 "get_zone_info": false, 00:08:06.766 "zone_management": false, 00:08:06.766 "zone_append": false, 00:08:06.766 "compare": false, 00:08:06.766 "compare_and_write": false, 00:08:06.766 "abort": true, 00:08:06.766 "seek_hole": false, 00:08:06.766 "seek_data": false, 00:08:06.766 "copy": true, 00:08:06.766 "nvme_iov_md": false 00:08:06.766 }, 00:08:06.766 "memory_domains": [ 00:08:06.766 { 00:08:06.766 "dma_device_id": "system", 00:08:06.766 "dma_device_type": 1 00:08:06.766 }, 00:08:06.766 { 00:08:06.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.766 "dma_device_type": 2 00:08:06.766 } 00:08:06.766 ], 00:08:06.766 "driver_specific": {} 00:08:06.766 }' 00:08:06.766 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.766 06:03:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:07.024 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:07.025 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:07.025 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:07.025 06:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:07.284 [2024-08-13 06:03:08.981350] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.284 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.544 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:07.544 "name": "Existed_Raid", 00:08:07.544 "uuid": 
"a4e29459-1195-48a5-91a2-b0aefeb26957", 00:08:07.544 "strip_size_kb": 0, 00:08:07.544 "state": "online", 00:08:07.544 "raid_level": "raid1", 00:08:07.544 "superblock": false, 00:08:07.544 "num_base_bdevs": 2, 00:08:07.544 "num_base_bdevs_discovered": 1, 00:08:07.544 "num_base_bdevs_operational": 1, 00:08:07.544 "base_bdevs_list": [ 00:08:07.544 { 00:08:07.544 "name": null, 00:08:07.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.544 "is_configured": false, 00:08:07.544 "data_offset": 0, 00:08:07.544 "data_size": 65536 00:08:07.544 }, 00:08:07.544 { 00:08:07.544 "name": "BaseBdev2", 00:08:07.544 "uuid": "3ba3fcf4-c956-4a68-99b9-aac0754f106d", 00:08:07.544 "is_configured": true, 00:08:07.544 "data_offset": 0, 00:08:07.544 "data_size": 65536 00:08:07.544 } 00:08:07.544 ] 00:08:07.544 }' 00:08:07.544 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:07.544 06:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.111 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:08.111 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:08.111 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.111 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:08.369 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:08.369 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.369 06:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:08.627 [2024-08-13 06:03:10.174461] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.627 [2024-08-13 06:03:10.174654] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.627 [2024-08-13 06:03:10.186088] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.627 [2024-08-13 06:03:10.186200] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.627 [2024-08-13 06:03:10.186214] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:08.627 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:08.627 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:08.627 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.627 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.627 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:08.628 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:08.628 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:08.628 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 73675 00:08:08.628 06:03:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 73675 ']' 00:08:08.628 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 73675 00:08:08.628 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:08:08.628 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:08.628 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73675 00:08:08.887 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.887 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.887 killing process with pid 73675 00:08:08.887 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73675' 00:08:08.887 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 73675 00:08:08.887 [2024-08-13 06:03:10.449672] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.887 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 73675 00:08:08.887 [2024-08-13 06:03:10.450660] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:09.147 00:08:09.147 real 0m9.238s 00:08:09.147 user 0m16.516s 00:08:09.147 sys 0m1.422s 00:08:09.147 ************************************ 00:08:09.147 END TEST raid_state_function_test 00:08:09.147 ************************************ 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.147 06:03:10 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:09.147 06:03:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:09.147 06:03:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.147 06:03:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.147 ************************************ 00:08:09.147 START TEST raid_state_function_test_sb 00:08:09.147 ************************************ 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=74015 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 74015' 00:08:09.147 Process raid pid: 74015 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 74015 /var/tmp/spdk-raid.sock 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 74015 ']' 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:09.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.147 06:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.147 [2024-08-13 06:03:10.846466] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
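(Aside, not part of the captured output: the verify_raid_bdev_state checks traced in these tests reduce to one RPC query plus a jq filter, both visible in the trace above. A minimal sketch against this run's socket follows; the output file name and the final field list are illustrative additions, not taken from the log:

    # Dump all raid bdevs known to the test app and keep the one under test.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")' > raid_info.json   # raid_info.json is an illustrative name
    # Pull out the fields the test asserts on: state, raid level, and the discovered/operational base bdev counts.
    jq -r '.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational' raid_info.json
)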
00:08:09.147 [2024-08-13 06:03:10.846693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.406 [2024-08-13 06:03:10.991367] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.406 [2024-08-13 06:03:11.036181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.406 [2024-08-13 06:03:11.079379] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.406 [2024-08-13 06:03:11.079490] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.974 06:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:09.974 06:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:08:09.974 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:10.232 [2024-08-13 06:03:11.836003] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.232 [2024-08-13 06:03:11.836152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.232 [2024-08-13 06:03:11.836171] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.232 [2024-08-13 06:03:11.836179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.232 06:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.491 06:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:10.491 "name": "Existed_Raid", 00:08:10.491 "uuid": "c872ee04-da6c-4cb8-b138-ec1bc6347886", 00:08:10.491 "strip_size_kb": 0, 00:08:10.491 "state": "configuring", 00:08:10.491 "raid_level": "raid1", 00:08:10.491 "superblock": 
true, 00:08:10.491 "num_base_bdevs": 2, 00:08:10.491 "num_base_bdevs_discovered": 0, 00:08:10.491 "num_base_bdevs_operational": 2, 00:08:10.491 "base_bdevs_list": [ 00:08:10.491 { 00:08:10.491 "name": "BaseBdev1", 00:08:10.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.491 "is_configured": false, 00:08:10.491 "data_offset": 0, 00:08:10.491 "data_size": 0 00:08:10.491 }, 00:08:10.491 { 00:08:10.491 "name": "BaseBdev2", 00:08:10.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.491 "is_configured": false, 00:08:10.491 "data_offset": 0, 00:08:10.491 "data_size": 0 00:08:10.491 } 00:08:10.491 ] 00:08:10.491 }' 00:08:10.491 06:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:10.491 06:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.058 06:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:11.058 [2024-08-13 06:03:12.754175] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.058 [2024-08-13 06:03:12.754282] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:11.058 06:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:11.317 [2024-08-13 06:03:12.965855] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.318 [2024-08-13 06:03:12.965977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.318 [2024-08-13 06:03:12.966021] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.318 [2024-08-13 06:03:12.966053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.318 06:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.577 [2024-08-13 06:03:13.158365] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.577 BaseBdev1 00:08:11.577 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:11.577 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:08:11.577 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:11.577 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:08:11.577 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:11.577 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:11.577 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.836 [ 00:08:11.836 { 00:08:11.836 "name": 
"BaseBdev1", 00:08:11.836 "aliases": [ 00:08:11.836 "1731e648-2433-4c09-b690-94b1ee149a3e" 00:08:11.836 ], 00:08:11.836 "product_name": "Malloc disk", 00:08:11.836 "block_size": 512, 00:08:11.836 "num_blocks": 65536, 00:08:11.836 "uuid": "1731e648-2433-4c09-b690-94b1ee149a3e", 00:08:11.836 "assigned_rate_limits": { 00:08:11.836 "rw_ios_per_sec": 0, 00:08:11.836 "rw_mbytes_per_sec": 0, 00:08:11.836 "r_mbytes_per_sec": 0, 00:08:11.836 "w_mbytes_per_sec": 0 00:08:11.836 }, 00:08:11.836 "claimed": true, 00:08:11.836 "claim_type": "exclusive_write", 00:08:11.836 "zoned": false, 00:08:11.836 "supported_io_types": { 00:08:11.836 "read": true, 00:08:11.836 "write": true, 00:08:11.836 "unmap": true, 00:08:11.836 "flush": true, 00:08:11.836 "reset": true, 00:08:11.836 "nvme_admin": false, 00:08:11.836 "nvme_io": false, 00:08:11.836 "nvme_io_md": false, 00:08:11.836 "write_zeroes": true, 00:08:11.836 "zcopy": true, 00:08:11.836 "get_zone_info": false, 00:08:11.836 "zone_management": false, 00:08:11.836 "zone_append": false, 00:08:11.836 "compare": false, 00:08:11.836 "compare_and_write": false, 00:08:11.836 "abort": true, 00:08:11.836 "seek_hole": false, 00:08:11.836 "seek_data": false, 00:08:11.836 "copy": true, 00:08:11.836 "nvme_iov_md": false 00:08:11.836 }, 00:08:11.836 "memory_domains": [ 00:08:11.836 { 00:08:11.836 "dma_device_id": "system", 00:08:11.836 "dma_device_type": 1 00:08:11.836 }, 00:08:11.836 { 00:08:11.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.836 "dma_device_type": 2 00:08:11.836 } 00:08:11.836 ], 00:08:11.836 "driver_specific": {} 00:08:11.836 } 00:08:11.836 ] 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.836 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.134 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:12.134 "name": "Existed_Raid", 00:08:12.134 "uuid": "e33a6ad1-0eea-4503-872e-bcb0c34f58a3", 00:08:12.134 "strip_size_kb": 0, 00:08:12.134 "state": "configuring", 00:08:12.134 "raid_level": "raid1", 00:08:12.134 
"superblock": true, 00:08:12.134 "num_base_bdevs": 2, 00:08:12.134 "num_base_bdevs_discovered": 1, 00:08:12.134 "num_base_bdevs_operational": 2, 00:08:12.134 "base_bdevs_list": [ 00:08:12.134 { 00:08:12.134 "name": "BaseBdev1", 00:08:12.134 "uuid": "1731e648-2433-4c09-b690-94b1ee149a3e", 00:08:12.134 "is_configured": true, 00:08:12.134 "data_offset": 2048, 00:08:12.134 "data_size": 63488 00:08:12.134 }, 00:08:12.134 { 00:08:12.134 "name": "BaseBdev2", 00:08:12.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.134 "is_configured": false, 00:08:12.134 "data_offset": 0, 00:08:12.134 "data_size": 0 00:08:12.134 } 00:08:12.134 ] 00:08:12.134 }' 00:08:12.134 06:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:12.134 06:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.709 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:12.709 [2024-08-13 06:03:14.496136] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.709 [2024-08-13 06:03:14.496274] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:12.967 [2024-08-13 06:03:14.699862] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.967 [2024-08-13 06:03:14.701988] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.967 [2024-08-13 06:03:14.702093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:12.967 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:12.967 
06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.226 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:13.226 "name": "Existed_Raid", 00:08:13.226 "uuid": "846cf5ed-f1fe-4939-a20f-797f76d38109", 00:08:13.226 "strip_size_kb": 0, 00:08:13.226 "state": "configuring", 00:08:13.226 "raid_level": "raid1", 00:08:13.226 "superblock": true, 00:08:13.226 "num_base_bdevs": 2, 00:08:13.226 "num_base_bdevs_discovered": 1, 00:08:13.226 "num_base_bdevs_operational": 2, 00:08:13.226 "base_bdevs_list": [ 00:08:13.226 { 00:08:13.226 "name": "BaseBdev1", 00:08:13.226 "uuid": "1731e648-2433-4c09-b690-94b1ee149a3e", 00:08:13.226 "is_configured": true, 00:08:13.226 "data_offset": 2048, 00:08:13.226 "data_size": 63488 00:08:13.226 }, 00:08:13.226 { 00:08:13.226 "name": "BaseBdev2", 00:08:13.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.226 "is_configured": false, 00:08:13.226 "data_offset": 0, 00:08:13.226 "data_size": 0 00:08:13.226 } 00:08:13.226 ] 00:08:13.226 }' 00:08:13.226 06:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:13.226 06:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.792 06:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:14.051 [2024-08-13 06:03:15.660696] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.051 [2024-08-13 06:03:15.661023] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:14.051 [2024-08-13 06:03:15.661103] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.051 [2024-08-13 06:03:15.661491] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:14.051 [2024-08-13 06:03:15.661696] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:14.051 [2024-08-13 06:03:15.661746] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:14.051 [2024-08-13 06:03:15.661944] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.051 BaseBdev2 00:08:14.051 06:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:14.051 06:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:08:14.051 06:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:14.051 06:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:08:14.051 06:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:14.051 06:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:14.051 06:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:14.315 06:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.315 [ 00:08:14.315 { 
00:08:14.315 "name": "BaseBdev2", 00:08:14.315 "aliases": [ 00:08:14.315 "4a37636e-a66f-4104-8292-14e5daa3c3df" 00:08:14.315 ], 00:08:14.315 "product_name": "Malloc disk", 00:08:14.315 "block_size": 512, 00:08:14.315 "num_blocks": 65536, 00:08:14.315 "uuid": "4a37636e-a66f-4104-8292-14e5daa3c3df", 00:08:14.315 "assigned_rate_limits": { 00:08:14.315 "rw_ios_per_sec": 0, 00:08:14.315 "rw_mbytes_per_sec": 0, 00:08:14.315 "r_mbytes_per_sec": 0, 00:08:14.315 "w_mbytes_per_sec": 0 00:08:14.315 }, 00:08:14.315 "claimed": true, 00:08:14.315 "claim_type": "exclusive_write", 00:08:14.315 "zoned": false, 00:08:14.315 "supported_io_types": { 00:08:14.315 "read": true, 00:08:14.315 "write": true, 00:08:14.315 "unmap": true, 00:08:14.315 "flush": true, 00:08:14.315 "reset": true, 00:08:14.315 "nvme_admin": false, 00:08:14.315 "nvme_io": false, 00:08:14.315 "nvme_io_md": false, 00:08:14.315 "write_zeroes": true, 00:08:14.315 "zcopy": true, 00:08:14.315 "get_zone_info": false, 00:08:14.315 "zone_management": false, 00:08:14.315 "zone_append": false, 00:08:14.315 "compare": false, 00:08:14.315 "compare_and_write": false, 00:08:14.315 "abort": true, 00:08:14.315 "seek_hole": false, 00:08:14.315 "seek_data": false, 00:08:14.315 "copy": true, 00:08:14.315 "nvme_iov_md": false 00:08:14.315 }, 00:08:14.315 "memory_domains": [ 00:08:14.315 { 00:08:14.315 "dma_device_id": "system", 00:08:14.315 "dma_device_type": 1 00:08:14.315 }, 00:08:14.315 { 00:08:14.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.315 "dma_device_type": 2 00:08:14.315 } 00:08:14.315 ], 00:08:14.315 "driver_specific": {} 00:08:14.315 } 00:08:14.315 ] 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.315 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.574 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:08:14.575 "name": "Existed_Raid", 00:08:14.575 "uuid": "846cf5ed-f1fe-4939-a20f-797f76d38109", 00:08:14.575 "strip_size_kb": 0, 00:08:14.575 "state": "online", 00:08:14.575 "raid_level": "raid1", 00:08:14.575 "superblock": true, 00:08:14.575 "num_base_bdevs": 2, 00:08:14.575 "num_base_bdevs_discovered": 2, 00:08:14.575 "num_base_bdevs_operational": 2, 00:08:14.575 "base_bdevs_list": [ 00:08:14.575 { 00:08:14.575 "name": "BaseBdev1", 00:08:14.575 "uuid": "1731e648-2433-4c09-b690-94b1ee149a3e", 00:08:14.575 "is_configured": true, 00:08:14.575 "data_offset": 2048, 00:08:14.575 "data_size": 63488 00:08:14.575 }, 00:08:14.575 { 00:08:14.575 "name": "BaseBdev2", 00:08:14.575 "uuid": "4a37636e-a66f-4104-8292-14e5daa3c3df", 00:08:14.575 "is_configured": true, 00:08:14.575 "data_offset": 2048, 00:08:14.575 "data_size": 63488 00:08:14.575 } 00:08:14.575 ] 00:08:14.575 }' 00:08:14.575 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.575 06:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.142 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.142 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:15.142 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:15.142 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:15.142 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:15.143 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:15.143 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:15.143 06:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:15.402 [2024-08-13 06:03:17.030685] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.402 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:15.402 "name": "Existed_Raid", 00:08:15.402 "aliases": [ 00:08:15.402 "846cf5ed-f1fe-4939-a20f-797f76d38109" 00:08:15.402 ], 00:08:15.402 "product_name": "Raid Volume", 00:08:15.402 "block_size": 512, 00:08:15.402 "num_blocks": 63488, 00:08:15.402 "uuid": "846cf5ed-f1fe-4939-a20f-797f76d38109", 00:08:15.402 "assigned_rate_limits": { 00:08:15.402 "rw_ios_per_sec": 0, 00:08:15.402 "rw_mbytes_per_sec": 0, 00:08:15.402 "r_mbytes_per_sec": 0, 00:08:15.402 "w_mbytes_per_sec": 0 00:08:15.402 }, 00:08:15.402 "claimed": false, 00:08:15.402 "zoned": false, 00:08:15.402 "supported_io_types": { 00:08:15.402 "read": true, 00:08:15.402 "write": true, 00:08:15.402 "unmap": false, 00:08:15.402 "flush": false, 00:08:15.402 "reset": true, 00:08:15.402 "nvme_admin": false, 00:08:15.402 "nvme_io": false, 00:08:15.402 "nvme_io_md": false, 00:08:15.402 "write_zeroes": true, 00:08:15.402 "zcopy": false, 00:08:15.402 "get_zone_info": false, 00:08:15.402 "zone_management": false, 00:08:15.402 "zone_append": false, 00:08:15.402 "compare": false, 00:08:15.402 "compare_and_write": false, 00:08:15.402 "abort": false, 00:08:15.402 "seek_hole": false, 00:08:15.402 "seek_data": false, 00:08:15.402 "copy": false, 00:08:15.402 "nvme_iov_md": false 
00:08:15.402 }, 00:08:15.402 "memory_domains": [ 00:08:15.402 { 00:08:15.402 "dma_device_id": "system", 00:08:15.402 "dma_device_type": 1 00:08:15.402 }, 00:08:15.402 { 00:08:15.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.402 "dma_device_type": 2 00:08:15.402 }, 00:08:15.402 { 00:08:15.402 "dma_device_id": "system", 00:08:15.402 "dma_device_type": 1 00:08:15.402 }, 00:08:15.402 { 00:08:15.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.402 "dma_device_type": 2 00:08:15.402 } 00:08:15.402 ], 00:08:15.402 "driver_specific": { 00:08:15.402 "raid": { 00:08:15.402 "uuid": "846cf5ed-f1fe-4939-a20f-797f76d38109", 00:08:15.402 "strip_size_kb": 0, 00:08:15.402 "state": "online", 00:08:15.402 "raid_level": "raid1", 00:08:15.402 "superblock": true, 00:08:15.402 "num_base_bdevs": 2, 00:08:15.402 "num_base_bdevs_discovered": 2, 00:08:15.402 "num_base_bdevs_operational": 2, 00:08:15.402 "base_bdevs_list": [ 00:08:15.402 { 00:08:15.402 "name": "BaseBdev1", 00:08:15.402 "uuid": "1731e648-2433-4c09-b690-94b1ee149a3e", 00:08:15.402 "is_configured": true, 00:08:15.402 "data_offset": 2048, 00:08:15.402 "data_size": 63488 00:08:15.402 }, 00:08:15.402 { 00:08:15.402 "name": "BaseBdev2", 00:08:15.403 "uuid": "4a37636e-a66f-4104-8292-14e5daa3c3df", 00:08:15.403 "is_configured": true, 00:08:15.403 "data_offset": 2048, 00:08:15.403 "data_size": 63488 00:08:15.403 } 00:08:15.403 ] 00:08:15.403 } 00:08:15.403 } 00:08:15.403 }' 00:08:15.403 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.403 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:15.403 BaseBdev2' 00:08:15.403 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:15.403 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:15.403 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:15.662 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:15.662 "name": "BaseBdev1", 00:08:15.662 "aliases": [ 00:08:15.662 "1731e648-2433-4c09-b690-94b1ee149a3e" 00:08:15.662 ], 00:08:15.662 "product_name": "Malloc disk", 00:08:15.662 "block_size": 512, 00:08:15.662 "num_blocks": 65536, 00:08:15.662 "uuid": "1731e648-2433-4c09-b690-94b1ee149a3e", 00:08:15.662 "assigned_rate_limits": { 00:08:15.662 "rw_ios_per_sec": 0, 00:08:15.662 "rw_mbytes_per_sec": 0, 00:08:15.662 "r_mbytes_per_sec": 0, 00:08:15.662 "w_mbytes_per_sec": 0 00:08:15.662 }, 00:08:15.662 "claimed": true, 00:08:15.662 "claim_type": "exclusive_write", 00:08:15.662 "zoned": false, 00:08:15.662 "supported_io_types": { 00:08:15.662 "read": true, 00:08:15.662 "write": true, 00:08:15.662 "unmap": true, 00:08:15.662 "flush": true, 00:08:15.662 "reset": true, 00:08:15.662 "nvme_admin": false, 00:08:15.662 "nvme_io": false, 00:08:15.662 "nvme_io_md": false, 00:08:15.662 "write_zeroes": true, 00:08:15.662 "zcopy": true, 00:08:15.662 "get_zone_info": false, 00:08:15.662 "zone_management": false, 00:08:15.662 "zone_append": false, 00:08:15.662 "compare": false, 00:08:15.662 "compare_and_write": false, 00:08:15.662 "abort": true, 00:08:15.662 "seek_hole": false, 00:08:15.662 "seek_data": false, 00:08:15.662 "copy": true, 00:08:15.662 "nvme_iov_md": false 
00:08:15.662 }, 00:08:15.662 "memory_domains": [ 00:08:15.662 { 00:08:15.662 "dma_device_id": "system", 00:08:15.662 "dma_device_type": 1 00:08:15.662 }, 00:08:15.662 { 00:08:15.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.662 "dma_device_type": 2 00:08:15.662 } 00:08:15.662 ], 00:08:15.662 "driver_specific": {} 00:08:15.662 }' 00:08:15.662 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.662 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.662 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:15.662 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.662 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:15.922 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:16.182 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:16.182 "name": "BaseBdev2", 00:08:16.182 "aliases": [ 00:08:16.182 "4a37636e-a66f-4104-8292-14e5daa3c3df" 00:08:16.182 ], 00:08:16.182 "product_name": "Malloc disk", 00:08:16.182 "block_size": 512, 00:08:16.182 "num_blocks": 65536, 00:08:16.182 "uuid": "4a37636e-a66f-4104-8292-14e5daa3c3df", 00:08:16.182 "assigned_rate_limits": { 00:08:16.182 "rw_ios_per_sec": 0, 00:08:16.182 "rw_mbytes_per_sec": 0, 00:08:16.182 "r_mbytes_per_sec": 0, 00:08:16.182 "w_mbytes_per_sec": 0 00:08:16.182 }, 00:08:16.182 "claimed": true, 00:08:16.182 "claim_type": "exclusive_write", 00:08:16.182 "zoned": false, 00:08:16.182 "supported_io_types": { 00:08:16.182 "read": true, 00:08:16.182 "write": true, 00:08:16.182 "unmap": true, 00:08:16.182 "flush": true, 00:08:16.182 "reset": true, 00:08:16.182 "nvme_admin": false, 00:08:16.182 "nvme_io": false, 00:08:16.182 "nvme_io_md": false, 00:08:16.182 "write_zeroes": true, 00:08:16.182 "zcopy": true, 00:08:16.182 "get_zone_info": false, 00:08:16.182 "zone_management": false, 00:08:16.182 "zone_append": false, 00:08:16.182 "compare": false, 00:08:16.182 "compare_and_write": false, 00:08:16.182 "abort": true, 00:08:16.182 "seek_hole": false, 00:08:16.182 "seek_data": false, 00:08:16.182 "copy": true, 00:08:16.182 "nvme_iov_md": false 00:08:16.182 }, 00:08:16.182 "memory_domains": [ 00:08:16.183 { 00:08:16.183 "dma_device_id": "system", 00:08:16.183 "dma_device_type": 1 00:08:16.183 
}, 00:08:16.183 { 00:08:16.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.183 "dma_device_type": 2 00:08:16.183 } 00:08:16.183 ], 00:08:16.183 "driver_specific": {} 00:08:16.183 }' 00:08:16.183 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.183 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.183 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:16.183 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.183 06:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:16.441 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:16.700 [2024-08-13 06:03:18.400179] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.700 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.960 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:16.960 "name": "Existed_Raid", 00:08:16.960 "uuid": "846cf5ed-f1fe-4939-a20f-797f76d38109", 00:08:16.960 "strip_size_kb": 0, 00:08:16.960 "state": "online", 00:08:16.960 "raid_level": "raid1", 00:08:16.960 "superblock": true, 00:08:16.960 "num_base_bdevs": 2, 00:08:16.960 "num_base_bdevs_discovered": 1, 00:08:16.960 "num_base_bdevs_operational": 1, 00:08:16.960 "base_bdevs_list": [ 00:08:16.960 { 00:08:16.960 "name": null, 00:08:16.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.960 "is_configured": false, 00:08:16.960 "data_offset": 2048, 00:08:16.960 "data_size": 63488 00:08:16.960 }, 00:08:16.960 { 00:08:16.960 "name": "BaseBdev2", 00:08:16.960 "uuid": "4a37636e-a66f-4104-8292-14e5daa3c3df", 00:08:16.960 "is_configured": true, 00:08:16.960 "data_offset": 2048, 00:08:16.960 "data_size": 63488 00:08:16.960 } 00:08:16.960 ] 00:08:16.960 }' 00:08:16.960 06:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:16.960 06:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.528 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:17.528 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:17.528 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.528 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:17.788 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:17.788 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.788 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:17.788 [2024-08-13 06:03:19.565458] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.788 [2024-08-13 06:03:19.565665] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.788 [2024-08-13 06:03:19.577136] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.788 [2024-08-13 06:03:19.577262] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.788 [2024-08-13 06:03:19.577307] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | 
select(.)' 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 74015 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 74015 ']' 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 74015 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:18.048 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74015 00:08:18.315 killing process with pid 74015 00:08:18.315 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:18.315 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:18.315 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74015' 00:08:18.315 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 74015 00:08:18.315 [2024-08-13 06:03:19.841409] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.315 06:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 74015 00:08:18.315 [2024-08-13 06:03:19.842454] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.315 ************************************ 00:08:18.315 END TEST raid_state_function_test_sb 00:08:18.315 ************************************ 00:08:18.315 06:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:18.315 00:08:18.315 real 0m9.316s 00:08:18.315 user 0m16.652s 00:08:18.315 sys 0m1.439s 00:08:18.315 06:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.315 06:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 06:03:20 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:18.575 06:03:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:18.575 06:03:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.575 06:03:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.575 ************************************ 00:08:18.575 START TEST raid_superblock_test 00:08:18.575 ************************************ 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=74354 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 74354 /var/tmp/spdk-raid.sock 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 74354 ']' 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:18.575 06:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.575 [2024-08-13 06:03:20.214672] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
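For reference while reading the trace that follows: raid_superblock_test boots a bare bdev_svc app on /var/tmp/spdk-raid.sock and then drives everything over JSON-RPC. A minimal standalone sketch of that same sequence, assembled only from commands visible in this log (the socket-wait loop is an assumption standing in for the harness's waitforlisten helper, not part of the original script), looks like this:

#!/usr/bin/env bash
# Hedged sketch: replays the RPC sequence from this trace against a fresh bdev_svc.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk          # checkout path as used in this run
sock=/var/tmp/spdk-raid.sock
rpc="$rootdir/scripts/rpc.py -s $sock"

# Start the bare bdev service with raid debug logging and wait for its RPC socket
# (the real test uses the waitforlisten helper from the autotest common library).
"$rootdir/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
svc_pid=$!
until [ -S "$sock" ]; do sleep 0.1; done

# Two 32 MB / 512 B-block malloc bdevs, each wrapped in a passthru bdev with a
# fixed UUID, exactly as the test prepares its base bdevs.
$rpc bdev_malloc_create 32 512 -b malloc1
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_malloc_create 32 512 -b malloc2
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Assemble a raid1 volume with an on-disk superblock (-s) and inspect its state.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

kill "$svc_pid"

The -s superblock flag is what the later steps in this trace exercise: because pt1/pt2 carry raid metadata, creating the raid again on the raw malloc bdevs is rejected with "File exists", while re-creating the passthru bdevs lets raid_bdev1 be re-assembled from the superblock alone.
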
00:08:18.575 [2024-08-13 06:03:20.214871] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74354 ] 00:08:18.575 [2024-08-13 06:03:20.359838] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.834 [2024-08-13 06:03:20.407571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.834 [2024-08-13 06:03:20.450473] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.834 [2024-08-13 06:03:20.450518] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.404 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:19.664 malloc1 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.664 [2024-08-13 06:03:21.427927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.664 [2024-08-13 06:03:21.428112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.664 [2024-08-13 06:03:21.428163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:19.664 [2024-08-13 06:03:21.428205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.664 [2024-08-13 06:03:21.430541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.664 [2024-08-13 06:03:21.430620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.664 pt1 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.664 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:19.924 malloc2 00:08:19.924 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.183 [2024-08-13 06:03:21.832024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.183 [2024-08-13 06:03:21.832125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.183 [2024-08-13 06:03:21.832146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:20.183 [2024-08-13 06:03:21.832155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.183 [2024-08-13 06:03:21.834269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.183 [2024-08-13 06:03:21.834307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.183 pt2 00:08:20.183 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:08:20.183 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:20.183 06:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:20.442 [2024-08-13 06:03:22.007775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:20.442 [2024-08-13 06:03:22.009780] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.442 [2024-08-13 06:03:22.010018] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:20.442 [2024-08-13 06:03:22.010094] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:20.442 [2024-08-13 06:03:22.010420] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:20.442 [2024-08-13 06:03:22.010607] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:20.442 [2024-08-13 06:03:22.010651] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:20.442 [2024-08-13 06:03:22.010838] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=0 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:20.442 "name": "raid_bdev1", 00:08:20.442 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:20.442 "strip_size_kb": 0, 00:08:20.442 "state": "online", 00:08:20.442 "raid_level": "raid1", 00:08:20.442 "superblock": true, 00:08:20.442 "num_base_bdevs": 2, 00:08:20.442 "num_base_bdevs_discovered": 2, 00:08:20.442 "num_base_bdevs_operational": 2, 00:08:20.442 "base_bdevs_list": [ 00:08:20.442 { 00:08:20.442 "name": "pt1", 00:08:20.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.442 "is_configured": true, 00:08:20.442 "data_offset": 2048, 00:08:20.442 "data_size": 63488 00:08:20.442 }, 00:08:20.442 { 00:08:20.442 "name": "pt2", 00:08:20.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.442 "is_configured": true, 00:08:20.442 "data_offset": 2048, 00:08:20.442 "data_size": 63488 00:08:20.442 } 00:08:20.442 ] 00:08:20.442 }' 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:20.442 06:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:21.011 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:21.271 [2024-08-13 06:03:22.946477] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.271 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:21.271 "name": "raid_bdev1", 00:08:21.271 "aliases": [ 00:08:21.271 "6c00a9fa-475b-47dc-97b1-27096620eb4a" 00:08:21.271 ], 00:08:21.271 "product_name": "Raid Volume", 00:08:21.271 "block_size": 512, 00:08:21.271 "num_blocks": 63488, 00:08:21.271 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:21.271 "assigned_rate_limits": { 00:08:21.271 "rw_ios_per_sec": 0, 00:08:21.271 
"rw_mbytes_per_sec": 0, 00:08:21.271 "r_mbytes_per_sec": 0, 00:08:21.271 "w_mbytes_per_sec": 0 00:08:21.271 }, 00:08:21.271 "claimed": false, 00:08:21.271 "zoned": false, 00:08:21.271 "supported_io_types": { 00:08:21.271 "read": true, 00:08:21.271 "write": true, 00:08:21.271 "unmap": false, 00:08:21.271 "flush": false, 00:08:21.271 "reset": true, 00:08:21.271 "nvme_admin": false, 00:08:21.271 "nvme_io": false, 00:08:21.271 "nvme_io_md": false, 00:08:21.271 "write_zeroes": true, 00:08:21.271 "zcopy": false, 00:08:21.271 "get_zone_info": false, 00:08:21.271 "zone_management": false, 00:08:21.271 "zone_append": false, 00:08:21.271 "compare": false, 00:08:21.271 "compare_and_write": false, 00:08:21.271 "abort": false, 00:08:21.271 "seek_hole": false, 00:08:21.271 "seek_data": false, 00:08:21.271 "copy": false, 00:08:21.271 "nvme_iov_md": false 00:08:21.271 }, 00:08:21.271 "memory_domains": [ 00:08:21.271 { 00:08:21.271 "dma_device_id": "system", 00:08:21.271 "dma_device_type": 1 00:08:21.271 }, 00:08:21.271 { 00:08:21.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.271 "dma_device_type": 2 00:08:21.271 }, 00:08:21.271 { 00:08:21.271 "dma_device_id": "system", 00:08:21.271 "dma_device_type": 1 00:08:21.271 }, 00:08:21.271 { 00:08:21.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.271 "dma_device_type": 2 00:08:21.271 } 00:08:21.271 ], 00:08:21.271 "driver_specific": { 00:08:21.271 "raid": { 00:08:21.271 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:21.271 "strip_size_kb": 0, 00:08:21.271 "state": "online", 00:08:21.271 "raid_level": "raid1", 00:08:21.271 "superblock": true, 00:08:21.271 "num_base_bdevs": 2, 00:08:21.271 "num_base_bdevs_discovered": 2, 00:08:21.271 "num_base_bdevs_operational": 2, 00:08:21.271 "base_bdevs_list": [ 00:08:21.271 { 00:08:21.271 "name": "pt1", 00:08:21.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.271 "is_configured": true, 00:08:21.271 "data_offset": 2048, 00:08:21.271 "data_size": 63488 00:08:21.271 }, 00:08:21.271 { 00:08:21.271 "name": "pt2", 00:08:21.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.271 "is_configured": true, 00:08:21.271 "data_offset": 2048, 00:08:21.271 "data_size": 63488 00:08:21.271 } 00:08:21.271 ] 00:08:21.271 } 00:08:21.271 } 00:08:21.271 }' 00:08:21.271 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.271 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:21.271 pt2' 00:08:21.271 06:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:21.271 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:21.271 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:21.530 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:21.530 "name": "pt1", 00:08:21.530 "aliases": [ 00:08:21.530 "00000000-0000-0000-0000-000000000001" 00:08:21.530 ], 00:08:21.530 "product_name": "passthru", 00:08:21.530 "block_size": 512, 00:08:21.530 "num_blocks": 65536, 00:08:21.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.530 "assigned_rate_limits": { 00:08:21.530 "rw_ios_per_sec": 0, 00:08:21.530 "rw_mbytes_per_sec": 0, 00:08:21.530 "r_mbytes_per_sec": 0, 00:08:21.530 "w_mbytes_per_sec": 0 00:08:21.530 }, 00:08:21.530 
"claimed": true, 00:08:21.530 "claim_type": "exclusive_write", 00:08:21.530 "zoned": false, 00:08:21.530 "supported_io_types": { 00:08:21.530 "read": true, 00:08:21.530 "write": true, 00:08:21.530 "unmap": true, 00:08:21.530 "flush": true, 00:08:21.530 "reset": true, 00:08:21.530 "nvme_admin": false, 00:08:21.530 "nvme_io": false, 00:08:21.530 "nvme_io_md": false, 00:08:21.530 "write_zeroes": true, 00:08:21.530 "zcopy": true, 00:08:21.530 "get_zone_info": false, 00:08:21.530 "zone_management": false, 00:08:21.530 "zone_append": false, 00:08:21.530 "compare": false, 00:08:21.530 "compare_and_write": false, 00:08:21.530 "abort": true, 00:08:21.530 "seek_hole": false, 00:08:21.530 "seek_data": false, 00:08:21.530 "copy": true, 00:08:21.530 "nvme_iov_md": false 00:08:21.530 }, 00:08:21.530 "memory_domains": [ 00:08:21.530 { 00:08:21.530 "dma_device_id": "system", 00:08:21.530 "dma_device_type": 1 00:08:21.530 }, 00:08:21.530 { 00:08:21.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.530 "dma_device_type": 2 00:08:21.530 } 00:08:21.530 ], 00:08:21.530 "driver_specific": { 00:08:21.530 "passthru": { 00:08:21.530 "name": "pt1", 00:08:21.530 "base_bdev_name": "malloc1" 00:08:21.530 } 00:08:21.530 } 00:08:21.530 }' 00:08:21.530 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:21.530 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:21.530 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:21.530 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:21.790 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:22.051 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:22.051 "name": "pt2", 00:08:22.051 "aliases": [ 00:08:22.051 "00000000-0000-0000-0000-000000000002" 00:08:22.051 ], 00:08:22.051 "product_name": "passthru", 00:08:22.051 "block_size": 512, 00:08:22.051 "num_blocks": 65536, 00:08:22.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.052 "assigned_rate_limits": { 00:08:22.052 "rw_ios_per_sec": 0, 00:08:22.052 "rw_mbytes_per_sec": 0, 00:08:22.052 "r_mbytes_per_sec": 0, 00:08:22.052 "w_mbytes_per_sec": 0 00:08:22.052 }, 00:08:22.052 "claimed": true, 00:08:22.052 "claim_type": "exclusive_write", 00:08:22.052 "zoned": false, 00:08:22.052 "supported_io_types": { 00:08:22.052 "read": 
true, 00:08:22.052 "write": true, 00:08:22.052 "unmap": true, 00:08:22.052 "flush": true, 00:08:22.052 "reset": true, 00:08:22.052 "nvme_admin": false, 00:08:22.052 "nvme_io": false, 00:08:22.052 "nvme_io_md": false, 00:08:22.052 "write_zeroes": true, 00:08:22.052 "zcopy": true, 00:08:22.052 "get_zone_info": false, 00:08:22.052 "zone_management": false, 00:08:22.052 "zone_append": false, 00:08:22.052 "compare": false, 00:08:22.052 "compare_and_write": false, 00:08:22.052 "abort": true, 00:08:22.052 "seek_hole": false, 00:08:22.052 "seek_data": false, 00:08:22.052 "copy": true, 00:08:22.052 "nvme_iov_md": false 00:08:22.052 }, 00:08:22.052 "memory_domains": [ 00:08:22.052 { 00:08:22.052 "dma_device_id": "system", 00:08:22.053 "dma_device_type": 1 00:08:22.053 }, 00:08:22.053 { 00:08:22.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.053 "dma_device_type": 2 00:08:22.053 } 00:08:22.053 ], 00:08:22.053 "driver_specific": { 00:08:22.053 "passthru": { 00:08:22.053 "name": "pt2", 00:08:22.053 "base_bdev_name": "malloc2" 00:08:22.053 } 00:08:22.053 } 00:08:22.053 }' 00:08:22.053 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:22.053 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:22.053 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:22.053 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:22.313 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:22.313 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:22.313 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:22.313 06:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:22.313 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:22.313 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:22.313 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:22.313 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:22.313 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:22.313 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:08:22.572 [2024-08-13 06:03:24.240143] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.572 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=6c00a9fa-475b-47dc-97b1-27096620eb4a 00:08:22.572 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 6c00a9fa-475b-47dc-97b1-27096620eb4a ']' 00:08:22.572 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:22.831 [2024-08-13 06:03:24.447525] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.831 [2024-08-13 06:03:24.447602] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.831 [2024-08-13 06:03:24.447720] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.831 [2024-08-13 06:03:24.447824] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.831 [2024-08-13 06:03:24.447898] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:22.831 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:22.831 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:08:23.091 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:08:23.091 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:08:23.091 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:08:23.091 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:23.091 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:08:23.091 06:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:23.350 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:23.350 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:23.610 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 
malloc2' -n raid_bdev1 00:08:23.869 [2024-08-13 06:03:25.433808] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:23.869 [2024-08-13 06:03:25.435606] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:23.869 [2024-08-13 06:03:25.435713] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:23.869 [2024-08-13 06:03:25.435801] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:23.869 [2024-08-13 06:03:25.435875] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.869 [2024-08-13 06:03:25.435924] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:23.869 request: 00:08:23.869 { 00:08:23.869 "name": "raid_bdev1", 00:08:23.869 "raid_level": "raid1", 00:08:23.869 "base_bdevs": [ 00:08:23.869 "malloc1", 00:08:23.869 "malloc2" 00:08:23.869 ], 00:08:23.869 "superblock": false, 00:08:23.869 "method": "bdev_raid_create", 00:08:23.869 "req_id": 1 00:08:23.869 } 00:08:23.869 Got JSON-RPC error response 00:08:23.869 response: 00:08:23.869 { 00:08:23.869 "code": -17, 00:08:23.869 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:23.869 } 00:08:23.869 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:08:23.869 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:08:23.870 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:08:23.870 06:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:08:23.870 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:08:23.870 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.870 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:08:23.870 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:08:24.129 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.129 [2024-08-13 06:03:25.825092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.129 [2024-08-13 06:03:25.825258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.129 [2024-08-13 06:03:25.825293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:24.129 [2024-08-13 06:03:25.825324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.129 [2024-08-13 06:03:25.827426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.129 [2024-08-13 06:03:25.827511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.129 [2024-08-13 06:03:25.827615] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:24.130 [2024-08-13 06:03:25.827684] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.130 pt1 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.130 06:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.389 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:24.389 "name": "raid_bdev1", 00:08:24.389 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:24.389 "strip_size_kb": 0, 00:08:24.389 "state": "configuring", 00:08:24.389 "raid_level": "raid1", 00:08:24.389 "superblock": true, 00:08:24.389 "num_base_bdevs": 2, 00:08:24.389 "num_base_bdevs_discovered": 1, 00:08:24.389 "num_base_bdevs_operational": 2, 00:08:24.389 "base_bdevs_list": [ 00:08:24.389 { 00:08:24.389 "name": "pt1", 00:08:24.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.389 "is_configured": true, 00:08:24.389 "data_offset": 2048, 00:08:24.389 "data_size": 63488 00:08:24.389 }, 00:08:24.389 { 00:08:24.389 "name": null, 00:08:24.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.389 "is_configured": false, 00:08:24.389 "data_offset": 2048, 00:08:24.389 "data_size": 63488 00:08:24.389 } 00:08:24.389 ] 00:08:24.389 }' 00:08:24.389 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:24.389 06:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.963 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:08:24.963 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:08:24.963 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:08:24.963 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.222 [2024-08-13 06:03:26.791450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:25.222 [2024-08-13 06:03:26.791621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.222 [2024-08-13 06:03:26.791657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.222 [2024-08-13 06:03:26.791668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.222 
[2024-08-13 06:03:26.792092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.222 [2024-08-13 06:03:26.792114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:25.222 [2024-08-13 06:03:26.792190] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:25.222 [2024-08-13 06:03:26.792213] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:25.222 [2024-08-13 06:03:26.792329] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:25.222 [2024-08-13 06:03:26.792348] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:25.222 [2024-08-13 06:03:26.792605] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:25.222 [2024-08-13 06:03:26.792720] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:25.222 [2024-08-13 06:03:26.792729] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:25.222 [2024-08-13 06:03:26.792830] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.222 pt2 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.222 06:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.482 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:25.482 "name": "raid_bdev1", 00:08:25.482 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:25.482 "strip_size_kb": 0, 00:08:25.482 "state": "online", 00:08:25.482 "raid_level": "raid1", 00:08:25.482 "superblock": true, 00:08:25.482 "num_base_bdevs": 2, 00:08:25.482 "num_base_bdevs_discovered": 2, 00:08:25.482 "num_base_bdevs_operational": 2, 00:08:25.482 "base_bdevs_list": [ 00:08:25.482 { 00:08:25.482 "name": "pt1", 00:08:25.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.482 "is_configured": true, 00:08:25.482 "data_offset": 2048, 00:08:25.482 
"data_size": 63488 00:08:25.482 }, 00:08:25.482 { 00:08:25.482 "name": "pt2", 00:08:25.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.482 "is_configured": true, 00:08:25.482 "data_offset": 2048, 00:08:25.482 "data_size": 63488 00:08:25.482 } 00:08:25.482 ] 00:08:25.482 }' 00:08:25.482 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:25.482 06:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:26.051 [2024-08-13 06:03:27.750055] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.051 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:26.051 "name": "raid_bdev1", 00:08:26.051 "aliases": [ 00:08:26.051 "6c00a9fa-475b-47dc-97b1-27096620eb4a" 00:08:26.051 ], 00:08:26.051 "product_name": "Raid Volume", 00:08:26.051 "block_size": 512, 00:08:26.051 "num_blocks": 63488, 00:08:26.051 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:26.051 "assigned_rate_limits": { 00:08:26.051 "rw_ios_per_sec": 0, 00:08:26.051 "rw_mbytes_per_sec": 0, 00:08:26.051 "r_mbytes_per_sec": 0, 00:08:26.051 "w_mbytes_per_sec": 0 00:08:26.051 }, 00:08:26.051 "claimed": false, 00:08:26.051 "zoned": false, 00:08:26.051 "supported_io_types": { 00:08:26.051 "read": true, 00:08:26.051 "write": true, 00:08:26.051 "unmap": false, 00:08:26.051 "flush": false, 00:08:26.051 "reset": true, 00:08:26.052 "nvme_admin": false, 00:08:26.052 "nvme_io": false, 00:08:26.052 "nvme_io_md": false, 00:08:26.052 "write_zeroes": true, 00:08:26.052 "zcopy": false, 00:08:26.052 "get_zone_info": false, 00:08:26.052 "zone_management": false, 00:08:26.052 "zone_append": false, 00:08:26.052 "compare": false, 00:08:26.052 "compare_and_write": false, 00:08:26.052 "abort": false, 00:08:26.052 "seek_hole": false, 00:08:26.052 "seek_data": false, 00:08:26.052 "copy": false, 00:08:26.052 "nvme_iov_md": false 00:08:26.052 }, 00:08:26.052 "memory_domains": [ 00:08:26.052 { 00:08:26.052 "dma_device_id": "system", 00:08:26.052 "dma_device_type": 1 00:08:26.052 }, 00:08:26.052 { 00:08:26.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.052 "dma_device_type": 2 00:08:26.052 }, 00:08:26.052 { 00:08:26.052 "dma_device_id": "system", 00:08:26.052 "dma_device_type": 1 00:08:26.052 }, 00:08:26.052 { 00:08:26.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.052 "dma_device_type": 2 00:08:26.052 } 00:08:26.052 ], 00:08:26.052 "driver_specific": { 00:08:26.052 "raid": { 00:08:26.052 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:26.052 "strip_size_kb": 0, 00:08:26.052 "state": 
"online", 00:08:26.052 "raid_level": "raid1", 00:08:26.052 "superblock": true, 00:08:26.052 "num_base_bdevs": 2, 00:08:26.052 "num_base_bdevs_discovered": 2, 00:08:26.052 "num_base_bdevs_operational": 2, 00:08:26.052 "base_bdevs_list": [ 00:08:26.052 { 00:08:26.052 "name": "pt1", 00:08:26.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.052 "is_configured": true, 00:08:26.052 "data_offset": 2048, 00:08:26.052 "data_size": 63488 00:08:26.052 }, 00:08:26.052 { 00:08:26.052 "name": "pt2", 00:08:26.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.052 "is_configured": true, 00:08:26.052 "data_offset": 2048, 00:08:26.052 "data_size": 63488 00:08:26.052 } 00:08:26.052 ] 00:08:26.052 } 00:08:26.052 } 00:08:26.052 }' 00:08:26.052 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.052 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:26.052 pt2' 00:08:26.052 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:26.052 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:26.052 06:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:26.312 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:26.312 "name": "pt1", 00:08:26.312 "aliases": [ 00:08:26.312 "00000000-0000-0000-0000-000000000001" 00:08:26.312 ], 00:08:26.312 "product_name": "passthru", 00:08:26.312 "block_size": 512, 00:08:26.312 "num_blocks": 65536, 00:08:26.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.312 "assigned_rate_limits": { 00:08:26.312 "rw_ios_per_sec": 0, 00:08:26.312 "rw_mbytes_per_sec": 0, 00:08:26.312 "r_mbytes_per_sec": 0, 00:08:26.312 "w_mbytes_per_sec": 0 00:08:26.312 }, 00:08:26.312 "claimed": true, 00:08:26.312 "claim_type": "exclusive_write", 00:08:26.312 "zoned": false, 00:08:26.312 "supported_io_types": { 00:08:26.312 "read": true, 00:08:26.312 "write": true, 00:08:26.312 "unmap": true, 00:08:26.312 "flush": true, 00:08:26.312 "reset": true, 00:08:26.312 "nvme_admin": false, 00:08:26.312 "nvme_io": false, 00:08:26.312 "nvme_io_md": false, 00:08:26.312 "write_zeroes": true, 00:08:26.312 "zcopy": true, 00:08:26.312 "get_zone_info": false, 00:08:26.312 "zone_management": false, 00:08:26.312 "zone_append": false, 00:08:26.312 "compare": false, 00:08:26.312 "compare_and_write": false, 00:08:26.312 "abort": true, 00:08:26.312 "seek_hole": false, 00:08:26.312 "seek_data": false, 00:08:26.312 "copy": true, 00:08:26.312 "nvme_iov_md": false 00:08:26.312 }, 00:08:26.312 "memory_domains": [ 00:08:26.312 { 00:08:26.312 "dma_device_id": "system", 00:08:26.312 "dma_device_type": 1 00:08:26.312 }, 00:08:26.312 { 00:08:26.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.312 "dma_device_type": 2 00:08:26.312 } 00:08:26.312 ], 00:08:26.312 "driver_specific": { 00:08:26.312 "passthru": { 00:08:26.312 "name": "pt1", 00:08:26.312 "base_bdev_name": "malloc1" 00:08:26.312 } 00:08:26.312 } 00:08:26.312 }' 00:08:26.312 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:26.312 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:26.572 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:26.833 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:26.833 "name": "pt2", 00:08:26.833 "aliases": [ 00:08:26.833 "00000000-0000-0000-0000-000000000002" 00:08:26.833 ], 00:08:26.833 "product_name": "passthru", 00:08:26.833 "block_size": 512, 00:08:26.833 "num_blocks": 65536, 00:08:26.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.833 "assigned_rate_limits": { 00:08:26.833 "rw_ios_per_sec": 0, 00:08:26.833 "rw_mbytes_per_sec": 0, 00:08:26.833 "r_mbytes_per_sec": 0, 00:08:26.833 "w_mbytes_per_sec": 0 00:08:26.833 }, 00:08:26.833 "claimed": true, 00:08:26.833 "claim_type": "exclusive_write", 00:08:26.833 "zoned": false, 00:08:26.833 "supported_io_types": { 00:08:26.833 "read": true, 00:08:26.833 "write": true, 00:08:26.833 "unmap": true, 00:08:26.833 "flush": true, 00:08:26.833 "reset": true, 00:08:26.833 "nvme_admin": false, 00:08:26.833 "nvme_io": false, 00:08:26.833 "nvme_io_md": false, 00:08:26.833 "write_zeroes": true, 00:08:26.833 "zcopy": true, 00:08:26.833 "get_zone_info": false, 00:08:26.833 "zone_management": false, 00:08:26.833 "zone_append": false, 00:08:26.833 "compare": false, 00:08:26.833 "compare_and_write": false, 00:08:26.833 "abort": true, 00:08:26.833 "seek_hole": false, 00:08:26.833 "seek_data": false, 00:08:26.833 "copy": true, 00:08:26.833 "nvme_iov_md": false 00:08:26.833 }, 00:08:26.833 "memory_domains": [ 00:08:26.833 { 00:08:26.833 "dma_device_id": "system", 00:08:26.833 "dma_device_type": 1 00:08:26.833 }, 00:08:26.833 { 00:08:26.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.833 "dma_device_type": 2 00:08:26.833 } 00:08:26.833 ], 00:08:26.833 "driver_specific": { 00:08:26.833 "passthru": { 00:08:26.833 "name": "pt2", 00:08:26.833 "base_bdev_name": "malloc2" 00:08:26.833 } 00:08:26.833 } 00:08:26.833 }' 00:08:26.833 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:27.093 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:27.353 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:27.353 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:08:27.353 06:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:27.353 [2024-08-13 06:03:29.087835] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.353 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 6c00a9fa-475b-47dc-97b1-27096620eb4a '!=' 6c00a9fa-475b-47dc-97b1-27096620eb4a ']' 00:08:27.353 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:08:27.353 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:27.353 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:27.353 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:27.613 [2024-08-13 06:03:29.291284] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.613 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.872 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.872 "name": "raid_bdev1", 00:08:27.872 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:27.872 "strip_size_kb": 0, 00:08:27.872 "state": "online", 
00:08:27.872 "raid_level": "raid1", 00:08:27.872 "superblock": true, 00:08:27.872 "num_base_bdevs": 2, 00:08:27.872 "num_base_bdevs_discovered": 1, 00:08:27.872 "num_base_bdevs_operational": 1, 00:08:27.872 "base_bdevs_list": [ 00:08:27.872 { 00:08:27.872 "name": null, 00:08:27.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.872 "is_configured": false, 00:08:27.872 "data_offset": 2048, 00:08:27.872 "data_size": 63488 00:08:27.872 }, 00:08:27.872 { 00:08:27.872 "name": "pt2", 00:08:27.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.872 "is_configured": true, 00:08:27.872 "data_offset": 2048, 00:08:27.872 "data_size": 63488 00:08:27.872 } 00:08:27.872 ] 00:08:27.872 }' 00:08:27.872 06:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.872 06:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.440 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:28.699 [2024-08-13 06:03:30.309499] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.699 [2024-08-13 06:03:30.309609] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.699 [2024-08-13 06:03:30.309707] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.699 [2024-08-13 06:03:30.309767] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.699 [2024-08-13 06:03:30.309801] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:28.699 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:08:28.699 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:08:28.958 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.217 [2024-08-13 06:03:30.916422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.217 [2024-08-13 06:03:30.916610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:08:29.217 [2024-08-13 06:03:30.916637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:29.217 [2024-08-13 06:03:30.916650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.217 [2024-08-13 06:03:30.918827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.217 [2024-08-13 06:03:30.918869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.217 [2024-08-13 06:03:30.918949] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.217 [2024-08-13 06:03:30.918988] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.217 [2024-08-13 06:03:30.919081] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:29.217 [2024-08-13 06:03:30.919092] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.217 [2024-08-13 06:03:30.919313] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:29.217 [2024-08-13 06:03:30.919435] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:29.217 [2024-08-13 06:03:30.919444] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:29.217 [2024-08-13 06:03:30.919553] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.217 pt2 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.217 06:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.476 06:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:29.476 "name": "raid_bdev1", 00:08:29.476 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:29.476 "strip_size_kb": 0, 00:08:29.476 "state": "online", 00:08:29.476 "raid_level": "raid1", 00:08:29.476 "superblock": true, 00:08:29.476 "num_base_bdevs": 2, 00:08:29.476 "num_base_bdevs_discovered": 1, 00:08:29.476 "num_base_bdevs_operational": 1, 00:08:29.476 "base_bdevs_list": [ 00:08:29.476 { 00:08:29.476 "name": null, 00:08:29.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.476 
"is_configured": false, 00:08:29.476 "data_offset": 2048, 00:08:29.476 "data_size": 63488 00:08:29.476 }, 00:08:29.476 { 00:08:29.476 "name": "pt2", 00:08:29.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.476 "is_configured": true, 00:08:29.476 "data_offset": 2048, 00:08:29.476 "data_size": 63488 00:08:29.476 } 00:08:29.476 ] 00:08:29.476 }' 00:08:29.476 06:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:29.476 06:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.044 06:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:30.303 [2024-08-13 06:03:31.862832] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.303 [2024-08-13 06:03:31.862869] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.303 [2024-08-13 06:03:31.862951] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.303 [2024-08-13 06:03:31.863002] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.303 [2024-08-13 06:03:31.863011] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:30.303 06:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.303 06:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:08:30.303 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:08:30.303 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:08:30.303 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:08:30.303 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.562 [2024-08-13 06:03:32.258166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.562 [2024-08-13 06:03:32.258241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.562 [2024-08-13 06:03:32.258261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:30.562 [2024-08-13 06:03:32.258271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.562 [2024-08-13 06:03:32.260370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.562 [2024-08-13 06:03:32.260408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.562 [2024-08-13 06:03:32.260512] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:30.562 [2024-08-13 06:03:32.260556] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:30.562 [2024-08-13 06:03:32.260678] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:30.562 [2024-08-13 06:03:32.260689] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.562 [2024-08-13 06:03:32.260709] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:08:30.562 [2024-08-13 06:03:32.260743] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:30.562 [2024-08-13 06:03:32.260820] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:30.562 [2024-08-13 06:03:32.260829] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:30.562 pt1 00:08:30.562 [2024-08-13 06:03:32.261073] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:30.562 [2024-08-13 06:03:32.261193] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:30.562 [2024-08-13 06:03:32.261205] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:30.562 [2024-08-13 06:03:32.261308] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.562 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:08:30.562 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:30.562 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:30.562 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:30.562 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.563 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.821 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:30.821 "name": "raid_bdev1", 00:08:30.821 "uuid": "6c00a9fa-475b-47dc-97b1-27096620eb4a", 00:08:30.821 "strip_size_kb": 0, 00:08:30.821 "state": "online", 00:08:30.821 "raid_level": "raid1", 00:08:30.821 "superblock": true, 00:08:30.821 "num_base_bdevs": 2, 00:08:30.821 "num_base_bdevs_discovered": 1, 00:08:30.821 "num_base_bdevs_operational": 1, 00:08:30.821 "base_bdevs_list": [ 00:08:30.821 { 00:08:30.821 "name": null, 00:08:30.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.821 "is_configured": false, 00:08:30.821 "data_offset": 2048, 00:08:30.821 "data_size": 63488 00:08:30.821 }, 00:08:30.821 { 00:08:30.821 "name": "pt2", 00:08:30.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.821 "is_configured": true, 00:08:30.821 "data_offset": 2048, 00:08:30.821 "data_size": 63488 00:08:30.821 } 00:08:30.821 ] 00:08:30.821 }' 00:08:30.821 06:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:08:30.821 06:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.389 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:08:31.389 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:31.648 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:08:31.648 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:31.648 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:08:31.648 [2024-08-13 06:03:33.428528] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 6c00a9fa-475b-47dc-97b1-27096620eb4a '!=' 6c00a9fa-475b-47dc-97b1-27096620eb4a ']' 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 74354 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 74354 ']' 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 74354 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74354 00:08:31.907 killing process with pid 74354 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:31.907 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:31.908 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74354' 00:08:31.908 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 74354 00:08:31.908 [2024-08-13 06:03:33.490221] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.908 [2024-08-13 06:03:33.490318] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.908 [2024-08-13 06:03:33.490368] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.908 [2024-08-13 06:03:33.490379] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:31.908 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 74354 00:08:31.908 [2024-08-13 06:03:33.512945] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.166 06:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:08:32.166 00:08:32.166 real 0m13.622s 00:08:32.166 user 0m25.048s 00:08:32.166 sys 0m2.107s 00:08:32.166 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.166 ************************************ 00:08:32.166 END TEST raid_superblock_test 00:08:32.166 ************************************ 00:08:32.166 06:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.166 06:03:33 
bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:32.167 06:03:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:32.167 06:03:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.167 06:03:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.167 ************************************ 00:08:32.167 START TEST raid_read_error_test 00:08:32.167 ************************************ 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 read 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.u5dcVq7QQL 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=74840 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 74840 /var/tmp/spdk-raid.sock 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 74840 ']' 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:32.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:32.167 06:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.167 [2024-08-13 06:03:33.921977] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:08:32.167 [2024-08-13 06:03:33.922214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74840 ] 00:08:32.436 [2024-08-13 06:03:34.066675] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.436 [2024-08-13 06:03:34.112468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.436 [2024-08-13 06:03:34.154939] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.436 [2024-08-13 06:03:34.155063] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.047 06:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:33.047 06:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:08:33.047 06:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:33.047 06:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:33.306 BaseBdev1_malloc 00:08:33.306 06:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:33.566 true 00:08:33.566 06:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:33.566 [2024-08-13 06:03:35.346795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:33.566 [2024-08-13 06:03:35.346956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.566 [2024-08-13 06:03:35.346986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:33.566 [2024-08-13 06:03:35.347012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.566 [2024-08-13 06:03:35.349180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.566 [2024-08-13 06:03:35.349223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:33.566 BaseBdev1 00:08:33.827 06:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:33.827 06:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:33.827 BaseBdev2_malloc 00:08:33.827 06:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:34.086 true 00:08:34.086 06:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:34.345 [2024-08-13 06:03:36.018614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:34.345 [2024-08-13 06:03:36.018722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.345 [2024-08-13 06:03:36.018747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:34.346 [2024-08-13 06:03:36.018759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.346 [2024-08-13 06:03:36.020905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.346 [2024-08-13 06:03:36.021001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:34.346 BaseBdev2 00:08:34.346 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:34.605 [2024-08-13 06:03:36.218325] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.605 [2024-08-13 06:03:36.220142] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.605 [2024-08-13 06:03:36.220398] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:34.605 [2024-08-13 06:03:36.220417] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:34.605 [2024-08-13 06:03:36.220747] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:34.605 [2024-08-13 06:03:36.220913] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:34.605 [2024-08-13 06:03:36.220923] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:34.605 [2024-08-13 06:03:36.221125] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:34.605 06:03:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.605 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.865 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:34.865 "name": "raid_bdev1", 00:08:34.865 "uuid": "2aa060df-b260-4512-a7cb-cb94400b1e6a", 00:08:34.865 "strip_size_kb": 0, 00:08:34.865 "state": "online", 00:08:34.865 "raid_level": "raid1", 00:08:34.865 "superblock": true, 00:08:34.865 "num_base_bdevs": 2, 00:08:34.865 "num_base_bdevs_discovered": 2, 00:08:34.865 "num_base_bdevs_operational": 2, 00:08:34.865 "base_bdevs_list": [ 00:08:34.865 { 00:08:34.865 "name": "BaseBdev1", 00:08:34.865 "uuid": "c16d1ae0-b81f-5f07-af75-ef03f70ef749", 00:08:34.865 "is_configured": true, 00:08:34.865 "data_offset": 2048, 00:08:34.865 "data_size": 63488 00:08:34.865 }, 00:08:34.865 { 00:08:34.865 "name": "BaseBdev2", 00:08:34.865 "uuid": "07c353da-0d56-5ea8-bf33-79a9da5de77a", 00:08:34.865 "is_configured": true, 00:08:34.865 "data_offset": 2048, 00:08:34.865 "data_size": 63488 00:08:34.865 } 00:08:34.865 ] 00:08:34.865 }' 00:08:34.865 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:34.865 06:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.434 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:35.434 06:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:35.434 [2024-08-13 06:03:37.065239] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:36.370 06:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:36.630 "name": "raid_bdev1", 00:08:36.630 "uuid": "2aa060df-b260-4512-a7cb-cb94400b1e6a", 00:08:36.630 "strip_size_kb": 0, 00:08:36.630 "state": "online", 00:08:36.630 "raid_level": "raid1", 00:08:36.630 "superblock": true, 00:08:36.630 "num_base_bdevs": 2, 00:08:36.630 "num_base_bdevs_discovered": 2, 00:08:36.630 "num_base_bdevs_operational": 2, 00:08:36.630 "base_bdevs_list": [ 00:08:36.630 { 00:08:36.630 "name": "BaseBdev1", 00:08:36.630 "uuid": "c16d1ae0-b81f-5f07-af75-ef03f70ef749", 00:08:36.630 "is_configured": true, 00:08:36.630 "data_offset": 2048, 00:08:36.630 "data_size": 63488 00:08:36.630 }, 00:08:36.630 { 00:08:36.630 "name": "BaseBdev2", 00:08:36.630 "uuid": "07c353da-0d56-5ea8-bf33-79a9da5de77a", 00:08:36.630 "is_configured": true, 00:08:36.630 "data_offset": 2048, 00:08:36.630 "data_size": 63488 00:08:36.630 } 00:08:36.630 ] 00:08:36.630 }' 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:36.630 06:03:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 06:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:37.458 [2024-08-13 06:03:39.115192] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.458 [2024-08-13 06:03:39.115322] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.458 [2024-08-13 06:03:39.117686] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.458 [2024-08-13 06:03:39.117771] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.458 [2024-08-13 06:03:39.117865] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.458 [2024-08-13 06:03:39.117950] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:37.458 0 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 74840 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 74840 ']' 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 74840 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74840 00:08:37.458 killing process with pid 74840 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:37.458 06:03:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74840' 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 74840 00:08:37.458 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 74840 00:08:37.458 [2024-08-13 06:03:39.177448] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.458 [2024-08-13 06:03:39.192877] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.u5dcVq7QQL 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:37.718 ************************************ 00:08:37.718 00:08:37.718 real 0m5.611s 00:08:37.718 user 0m8.701s 00:08:37.718 sys 0m0.779s 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.718 06:03:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.718 END TEST raid_read_error_test 00:08:37.718 ************************************ 00:08:37.718 06:03:39 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:37.718 06:03:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:37.718 06:03:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.718 06:03:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.718 ************************************ 00:08:37.718 START TEST raid_write_error_test 00:08:37.718 ************************************ 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 write 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:37.718 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs 
)) 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.0r7YhnlNDf 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=75005 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 75005 /var/tmp/spdk-raid.sock 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 75005 ']' 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:37.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:37.977 06:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.977 [2024-08-13 06:03:39.599275] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:08:37.977 [2024-08-13 06:03:39.599476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75005 ] 00:08:37.978 [2024-08-13 06:03:39.727947] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.236 [2024-08-13 06:03:39.776218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.236 [2024-08-13 06:03:39.819253] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.236 [2024-08-13 06:03:39.819372] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.830 06:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:38.830 06:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:08:38.830 06:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:38.830 06:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.088 BaseBdev1_malloc 00:08:39.088 06:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:39.088 true 00:08:39.088 06:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.347 [2024-08-13 06:03:41.031319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.347 [2024-08-13 06:03:41.031477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.347 [2024-08-13 06:03:41.031518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:39.347 [2024-08-13 06:03:41.031550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.347 [2024-08-13 06:03:41.033758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.347 [2024-08-13 06:03:41.033842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.347 BaseBdev1 00:08:39.347 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:39.347 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.606 BaseBdev2_malloc 00:08:39.606 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:39.865 true 00:08:39.865 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.865 [2024-08-13 06:03:41.631184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.865 [2024-08-13 06:03:41.631327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.865 [2024-08-13 06:03:41.631372] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:39.865 [2024-08-13 06:03:41.631404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.865 [2024-08-13 06:03:41.633699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.865 [2024-08-13 06:03:41.633780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.865 BaseBdev2 00:08:39.865 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:40.124 [2024-08-13 06:03:41.850860] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.124 [2024-08-13 06:03:41.852904] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.124 [2024-08-13 06:03:41.853194] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:40.124 [2024-08-13 06:03:41.853254] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:40.124 [2024-08-13 06:03:41.853603] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:40.124 [2024-08-13 06:03:41.853826] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:40.124 [2024-08-13 06:03:41.853869] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:40.124 [2024-08-13 06:03:41.854104] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.124 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:40.124 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:40.124 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:40.124 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.125 06:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.384 06:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:40.384 "name": "raid_bdev1", 00:08:40.384 "uuid": "fadd8ea4-438f-4fbb-a630-e2751d16187e", 00:08:40.384 "strip_size_kb": 0, 00:08:40.384 "state": "online", 00:08:40.384 "raid_level": "raid1", 00:08:40.384 "superblock": true, 00:08:40.384 "num_base_bdevs": 2, 00:08:40.384 "num_base_bdevs_discovered": 
2, 00:08:40.384 "num_base_bdevs_operational": 2, 00:08:40.384 "base_bdevs_list": [ 00:08:40.384 { 00:08:40.384 "name": "BaseBdev1", 00:08:40.384 "uuid": "62534898-20e6-5100-ad97-08823f41cda0", 00:08:40.384 "is_configured": true, 00:08:40.384 "data_offset": 2048, 00:08:40.384 "data_size": 63488 00:08:40.384 }, 00:08:40.384 { 00:08:40.384 "name": "BaseBdev2", 00:08:40.384 "uuid": "8ae32f2e-693a-59bc-9826-89519274b936", 00:08:40.384 "is_configured": true, 00:08:40.384 "data_offset": 2048, 00:08:40.384 "data_size": 63488 00:08:40.384 } 00:08:40.384 ] 00:08:40.384 }' 00:08:40.384 06:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:40.384 06:03:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.952 06:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:40.952 06:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:40.952 [2024-08-13 06:03:42.741776] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:41.890 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:42.147 [2024-08-13 06:03:43.875721] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:42.147 [2024-08-13 06:03:43.875783] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.147 [2024-08-13 06:03:43.876018] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:42.147 06:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.147 06:03:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.405 06:03:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:42.405 "name": "raid_bdev1", 00:08:42.405 "uuid": "fadd8ea4-438f-4fbb-a630-e2751d16187e", 00:08:42.405 "strip_size_kb": 0, 00:08:42.405 "state": "online", 00:08:42.405 "raid_level": "raid1", 00:08:42.405 "superblock": true, 00:08:42.405 "num_base_bdevs": 2, 00:08:42.405 "num_base_bdevs_discovered": 1, 00:08:42.405 "num_base_bdevs_operational": 1, 00:08:42.405 "base_bdevs_list": [ 00:08:42.405 { 00:08:42.405 "name": null, 00:08:42.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.405 "is_configured": false, 00:08:42.405 "data_offset": 2048, 00:08:42.405 "data_size": 63488 00:08:42.405 }, 00:08:42.405 { 00:08:42.405 "name": "BaseBdev2", 00:08:42.405 "uuid": "8ae32f2e-693a-59bc-9826-89519274b936", 00:08:42.405 "is_configured": true, 00:08:42.405 "data_offset": 2048, 00:08:42.405 "data_size": 63488 00:08:42.405 } 00:08:42.405 ] 00:08:42.405 }' 00:08:42.405 06:03:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:42.405 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.971 06:03:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:43.230 [2024-08-13 06:03:44.870067] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.230 [2024-08-13 06:03:44.870192] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.230 [2024-08-13 06:03:44.872524] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.230 [2024-08-13 06:03:44.872605] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.230 [2024-08-13 06:03:44.872685] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.230 [2024-08-13 06:03:44.872762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:43.230 0 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 75005 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 75005 ']' 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 75005 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75005 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75005' 00:08:43.230 killing process with pid 75005 00:08:43.230 06:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 75005 00:08:43.230 [2024-08-13 06:03:44.946001] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.230 06:03:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 75005 00:08:43.230 [2024-08-13 06:03:44.961761] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.0r7YhnlNDf 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:43.489 ************************************ 00:08:43.489 END TEST raid_write_error_test 00:08:43.489 ************************************ 00:08:43.489 00:08:43.489 real 0m5.710s 00:08:43.489 user 0m8.841s 00:08:43.489 sys 0m0.826s 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:43.489 06:03:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.489 06:03:45 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:08:43.489 06:03:45 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:08:43.489 06:03:45 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:43.489 06:03:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:43.489 06:03:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.489 06:03:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.747 ************************************ 00:08:43.747 START TEST raid_state_function_test 00:08:43.747 ************************************ 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:43.747 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=75169 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 75169' 00:08:43.748 Process raid pid: 75169 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 75169 /var/tmp/spdk-raid.sock 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 75169 ']' 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:43.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:43.748 06:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 [2024-08-13 06:03:45.383667] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:08:43.748 [2024-08-13 06:03:45.384437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.748 [2024-08-13 06:03:45.531535] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.006 [2024-08-13 06:03:45.578559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.006 [2024-08-13 06:03:45.622212] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.006 [2024-08-13 06:03:45.622319] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.593 06:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:44.593 06:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:08:44.593 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:44.854 [2024-08-13 06:03:46.438234] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.854 [2024-08-13 06:03:46.438382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.854 [2024-08-13 06:03:46.438423] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.854 [2024-08-13 06:03:46.438449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.854 [2024-08-13 06:03:46.438476] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.854 [2024-08-13 06:03:46.438498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.854 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.112 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:08:45.112 "name": "Existed_Raid", 00:08:45.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.112 "strip_size_kb": 64, 00:08:45.112 "state": "configuring", 00:08:45.112 "raid_level": "raid0", 00:08:45.112 "superblock": false, 00:08:45.112 "num_base_bdevs": 3, 00:08:45.112 "num_base_bdevs_discovered": 0, 00:08:45.112 "num_base_bdevs_operational": 3, 00:08:45.112 "base_bdevs_list": [ 00:08:45.112 { 00:08:45.112 "name": "BaseBdev1", 00:08:45.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.112 "is_configured": false, 00:08:45.112 "data_offset": 0, 00:08:45.112 "data_size": 0 00:08:45.112 }, 00:08:45.112 { 00:08:45.112 "name": "BaseBdev2", 00:08:45.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.112 "is_configured": false, 00:08:45.112 "data_offset": 0, 00:08:45.112 "data_size": 0 00:08:45.112 }, 00:08:45.112 { 00:08:45.112 "name": "BaseBdev3", 00:08:45.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.112 "is_configured": false, 00:08:45.112 "data_offset": 0, 00:08:45.112 "data_size": 0 00:08:45.112 } 00:08:45.112 ] 00:08:45.112 }' 00:08:45.112 06:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:45.112 06:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.678 06:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:45.937 [2024-08-13 06:03:47.512288] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.937 [2024-08-13 06:03:47.512409] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:45.937 06:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:46.195 [2024-08-13 06:03:47.731961] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.195 [2024-08-13 06:03:47.732098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.195 [2024-08-13 06:03:47.732133] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.195 [2024-08-13 06:03:47.732160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.195 [2024-08-13 06:03:47.732185] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.195 [2024-08-13 06:03:47.732204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.195 06:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.195 [2024-08-13 06:03:47.972918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.195 BaseBdev1 00:08:46.454 06:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:46.454 06:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:08:46.454 06:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:46.454 06:03:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@897 -- # local i 00:08:46.454 06:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:46.454 06:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:46.454 06:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:46.454 06:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.712 [ 00:08:46.712 { 00:08:46.712 "name": "BaseBdev1", 00:08:46.712 "aliases": [ 00:08:46.712 "d40b2c10-5eac-437b-b91a-327000c2583b" 00:08:46.712 ], 00:08:46.712 "product_name": "Malloc disk", 00:08:46.712 "block_size": 512, 00:08:46.712 "num_blocks": 65536, 00:08:46.712 "uuid": "d40b2c10-5eac-437b-b91a-327000c2583b", 00:08:46.712 "assigned_rate_limits": { 00:08:46.712 "rw_ios_per_sec": 0, 00:08:46.712 "rw_mbytes_per_sec": 0, 00:08:46.712 "r_mbytes_per_sec": 0, 00:08:46.712 "w_mbytes_per_sec": 0 00:08:46.712 }, 00:08:46.712 "claimed": true, 00:08:46.712 "claim_type": "exclusive_write", 00:08:46.712 "zoned": false, 00:08:46.712 "supported_io_types": { 00:08:46.712 "read": true, 00:08:46.712 "write": true, 00:08:46.712 "unmap": true, 00:08:46.712 "flush": true, 00:08:46.712 "reset": true, 00:08:46.712 "nvme_admin": false, 00:08:46.712 "nvme_io": false, 00:08:46.712 "nvme_io_md": false, 00:08:46.712 "write_zeroes": true, 00:08:46.712 "zcopy": true, 00:08:46.712 "get_zone_info": false, 00:08:46.712 "zone_management": false, 00:08:46.712 "zone_append": false, 00:08:46.712 "compare": false, 00:08:46.712 "compare_and_write": false, 00:08:46.712 "abort": true, 00:08:46.712 "seek_hole": false, 00:08:46.712 "seek_data": false, 00:08:46.712 "copy": true, 00:08:46.712 "nvme_iov_md": false 00:08:46.713 }, 00:08:46.713 "memory_domains": [ 00:08:46.713 { 00:08:46.713 "dma_device_id": "system", 00:08:46.713 "dma_device_type": 1 00:08:46.713 }, 00:08:46.713 { 00:08:46.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.713 "dma_device_type": 2 00:08:46.713 } 00:08:46.713 ], 00:08:46.713 "driver_specific": {} 00:08:46.713 } 00:08:46.713 ] 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.713 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.971 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:46.971 "name": "Existed_Raid", 00:08:46.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.971 "strip_size_kb": 64, 00:08:46.971 "state": "configuring", 00:08:46.971 "raid_level": "raid0", 00:08:46.971 "superblock": false, 00:08:46.971 "num_base_bdevs": 3, 00:08:46.971 "num_base_bdevs_discovered": 1, 00:08:46.971 "num_base_bdevs_operational": 3, 00:08:46.971 "base_bdevs_list": [ 00:08:46.971 { 00:08:46.971 "name": "BaseBdev1", 00:08:46.971 "uuid": "d40b2c10-5eac-437b-b91a-327000c2583b", 00:08:46.971 "is_configured": true, 00:08:46.971 "data_offset": 0, 00:08:46.971 "data_size": 65536 00:08:46.971 }, 00:08:46.971 { 00:08:46.971 "name": "BaseBdev2", 00:08:46.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.971 "is_configured": false, 00:08:46.971 "data_offset": 0, 00:08:46.971 "data_size": 0 00:08:46.971 }, 00:08:46.971 { 00:08:46.972 "name": "BaseBdev3", 00:08:46.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.972 "is_configured": false, 00:08:46.972 "data_offset": 0, 00:08:46.972 "data_size": 0 00:08:46.972 } 00:08:46.972 ] 00:08:46.972 }' 00:08:46.972 06:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:46.972 06:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.539 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:47.798 [2024-08-13 06:03:49.410503] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.798 [2024-08-13 06:03:49.410567] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:47.798 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:48.057 [2024-08-13 06:03:49.630222] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.057 [2024-08-13 06:03:49.632113] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.057 [2024-08-13 06:03:49.632218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.057 [2024-08-13 06:03:49.632235] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.057 [2024-08-13 06:03:49.632243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.057 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.317 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:48.317 "name": "Existed_Raid", 00:08:48.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.317 "strip_size_kb": 64, 00:08:48.317 "state": "configuring", 00:08:48.317 "raid_level": "raid0", 00:08:48.317 "superblock": false, 00:08:48.317 "num_base_bdevs": 3, 00:08:48.317 "num_base_bdevs_discovered": 1, 00:08:48.317 "num_base_bdevs_operational": 3, 00:08:48.317 "base_bdevs_list": [ 00:08:48.317 { 00:08:48.317 "name": "BaseBdev1", 00:08:48.317 "uuid": "d40b2c10-5eac-437b-b91a-327000c2583b", 00:08:48.317 "is_configured": true, 00:08:48.317 "data_offset": 0, 00:08:48.317 "data_size": 65536 00:08:48.317 }, 00:08:48.317 { 00:08:48.317 "name": "BaseBdev2", 00:08:48.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.317 "is_configured": false, 00:08:48.317 "data_offset": 0, 00:08:48.317 "data_size": 0 00:08:48.317 }, 00:08:48.317 { 00:08:48.317 "name": "BaseBdev3", 00:08:48.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.317 "is_configured": false, 00:08:48.317 "data_offset": 0, 00:08:48.317 "data_size": 0 00:08:48.317 } 00:08:48.317 ] 00:08:48.317 }' 00:08:48.317 06:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:48.317 06:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.886 [2024-08-13 06:03:50.595572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.886 BaseBdev2 00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
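The pattern the test is exercising is visible in the commands themselves: the raid0 volume is registered while its members are still missing, so it sits in the "configuring" state, and each malloc base bdev that appears is claimed and counted until the set is complete. A condensed sketch of that RPC sequence, assuming the same socket and jq available on the host:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # register the raid0 array before any member exists; it stays in "configuring"
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # add 32 MiB / 512-byte-block malloc members one at a time and re-read the array state
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "$b"
      $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  done
  # the printed state remains "configuring" until the last member arrives, then flips to "online"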
00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:48.886 06:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:49.146 06:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.405 [ 00:08:49.405 { 00:08:49.405 "name": "BaseBdev2", 00:08:49.405 "aliases": [ 00:08:49.405 "fee5d3c0-3c8f-463e-bc53-901323f49e87" 00:08:49.405 ], 00:08:49.405 "product_name": "Malloc disk", 00:08:49.405 "block_size": 512, 00:08:49.405 "num_blocks": 65536, 00:08:49.405 "uuid": "fee5d3c0-3c8f-463e-bc53-901323f49e87", 00:08:49.405 "assigned_rate_limits": { 00:08:49.405 "rw_ios_per_sec": 0, 00:08:49.405 "rw_mbytes_per_sec": 0, 00:08:49.405 "r_mbytes_per_sec": 0, 00:08:49.405 "w_mbytes_per_sec": 0 00:08:49.405 }, 00:08:49.405 "claimed": true, 00:08:49.405 "claim_type": "exclusive_write", 00:08:49.405 "zoned": false, 00:08:49.405 "supported_io_types": { 00:08:49.405 "read": true, 00:08:49.405 "write": true, 00:08:49.406 "unmap": true, 00:08:49.406 "flush": true, 00:08:49.406 "reset": true, 00:08:49.406 "nvme_admin": false, 00:08:49.406 "nvme_io": false, 00:08:49.406 "nvme_io_md": false, 00:08:49.406 "write_zeroes": true, 00:08:49.406 "zcopy": true, 00:08:49.406 "get_zone_info": false, 00:08:49.406 "zone_management": false, 00:08:49.406 "zone_append": false, 00:08:49.406 "compare": false, 00:08:49.406 "compare_and_write": false, 00:08:49.406 "abort": true, 00:08:49.406 "seek_hole": false, 00:08:49.406 "seek_data": false, 00:08:49.406 "copy": true, 00:08:49.406 "nvme_iov_md": false 00:08:49.406 }, 00:08:49.406 "memory_domains": [ 00:08:49.406 { 00:08:49.406 "dma_device_id": "system", 00:08:49.406 "dma_device_type": 1 00:08:49.406 }, 00:08:49.406 { 00:08:49.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.406 "dma_device_type": 2 00:08:49.406 } 00:08:49.406 ], 00:08:49.406 "driver_specific": {} 00:08:49.406 } 00:08:49.406 ] 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:49.406 
06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.406 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.666 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:49.666 "name": "Existed_Raid", 00:08:49.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.666 "strip_size_kb": 64, 00:08:49.666 "state": "configuring", 00:08:49.666 "raid_level": "raid0", 00:08:49.666 "superblock": false, 00:08:49.666 "num_base_bdevs": 3, 00:08:49.666 "num_base_bdevs_discovered": 2, 00:08:49.666 "num_base_bdevs_operational": 3, 00:08:49.666 "base_bdevs_list": [ 00:08:49.666 { 00:08:49.666 "name": "BaseBdev1", 00:08:49.666 "uuid": "d40b2c10-5eac-437b-b91a-327000c2583b", 00:08:49.666 "is_configured": true, 00:08:49.666 "data_offset": 0, 00:08:49.666 "data_size": 65536 00:08:49.666 }, 00:08:49.666 { 00:08:49.666 "name": "BaseBdev2", 00:08:49.666 "uuid": "fee5d3c0-3c8f-463e-bc53-901323f49e87", 00:08:49.666 "is_configured": true, 00:08:49.666 "data_offset": 0, 00:08:49.666 "data_size": 65536 00:08:49.666 }, 00:08:49.666 { 00:08:49.666 "name": "BaseBdev3", 00:08:49.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.666 "is_configured": false, 00:08:49.666 "data_offset": 0, 00:08:49.666 "data_size": 0 00:08:49.666 } 00:08:49.666 ] 00:08:49.666 }' 00:08:49.666 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:49.666 06:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.234 [2024-08-13 06:03:51.896503] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.234 [2024-08-13 06:03:51.896628] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:50.234 [2024-08-13 06:03:51.896654] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:50.234 [2024-08-13 06:03:51.897007] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:50.234 [2024-08-13 06:03:51.897219] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:50.234 [2024-08-13 06:03:51.897268] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:50.234 [2024-08-13 06:03:51.897516] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.234 BaseBdev3 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:50.234 06:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:50.493 06:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.752 [ 00:08:50.752 { 00:08:50.752 "name": "BaseBdev3", 00:08:50.752 "aliases": [ 00:08:50.752 "6d35c5ce-b0af-4f4d-b2e8-74285dfe083a" 00:08:50.752 ], 00:08:50.753 "product_name": "Malloc disk", 00:08:50.753 "block_size": 512, 00:08:50.753 "num_blocks": 65536, 00:08:50.753 "uuid": "6d35c5ce-b0af-4f4d-b2e8-74285dfe083a", 00:08:50.753 "assigned_rate_limits": { 00:08:50.753 "rw_ios_per_sec": 0, 00:08:50.753 "rw_mbytes_per_sec": 0, 00:08:50.753 "r_mbytes_per_sec": 0, 00:08:50.753 "w_mbytes_per_sec": 0 00:08:50.753 }, 00:08:50.753 "claimed": true, 00:08:50.753 "claim_type": "exclusive_write", 00:08:50.753 "zoned": false, 00:08:50.753 "supported_io_types": { 00:08:50.753 "read": true, 00:08:50.753 "write": true, 00:08:50.753 "unmap": true, 00:08:50.753 "flush": true, 00:08:50.753 "reset": true, 00:08:50.753 "nvme_admin": false, 00:08:50.753 "nvme_io": false, 00:08:50.753 "nvme_io_md": false, 00:08:50.753 "write_zeroes": true, 00:08:50.753 "zcopy": true, 00:08:50.753 "get_zone_info": false, 00:08:50.753 "zone_management": false, 00:08:50.753 "zone_append": false, 00:08:50.753 "compare": false, 00:08:50.753 "compare_and_write": false, 00:08:50.753 "abort": true, 00:08:50.753 "seek_hole": false, 00:08:50.753 "seek_data": false, 00:08:50.753 "copy": true, 00:08:50.753 "nvme_iov_md": false 00:08:50.753 }, 00:08:50.753 "memory_domains": [ 00:08:50.753 { 00:08:50.753 "dma_device_id": "system", 00:08:50.753 "dma_device_type": 1 00:08:50.753 }, 00:08:50.753 { 00:08:50.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.753 "dma_device_type": 2 00:08:50.753 } 00:08:50.753 ], 00:08:50.753 "driver_specific": {} 00:08:50.753 } 00:08:50.753 ] 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:50.753 "name": "Existed_Raid", 00:08:50.753 "uuid": "a975e35b-aef7-4213-8073-662c192a3ef0", 00:08:50.753 "strip_size_kb": 64, 00:08:50.753 "state": "online", 00:08:50.753 "raid_level": "raid0", 00:08:50.753 "superblock": false, 00:08:50.753 "num_base_bdevs": 3, 00:08:50.753 "num_base_bdevs_discovered": 3, 00:08:50.753 "num_base_bdevs_operational": 3, 00:08:50.753 "base_bdevs_list": [ 00:08:50.753 { 00:08:50.753 "name": "BaseBdev1", 00:08:50.753 "uuid": "d40b2c10-5eac-437b-b91a-327000c2583b", 00:08:50.753 "is_configured": true, 00:08:50.753 "data_offset": 0, 00:08:50.753 "data_size": 65536 00:08:50.753 }, 00:08:50.753 { 00:08:50.753 "name": "BaseBdev2", 00:08:50.753 "uuid": "fee5d3c0-3c8f-463e-bc53-901323f49e87", 00:08:50.753 "is_configured": true, 00:08:50.753 "data_offset": 0, 00:08:50.753 "data_size": 65536 00:08:50.753 }, 00:08:50.753 { 00:08:50.753 "name": "BaseBdev3", 00:08:50.753 "uuid": "6d35c5ce-b0af-4f4d-b2e8-74285dfe083a", 00:08:50.753 "is_configured": true, 00:08:50.753 "data_offset": 0, 00:08:50.753 "data_size": 65536 00:08:50.753 } 00:08:50.753 ] 00:08:50.753 }' 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:50.753 06:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:51.323 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:51.584 [2024-08-13 06:03:53.294471] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.584 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:51.584 "name": "Existed_Raid", 00:08:51.584 "aliases": [ 00:08:51.584 "a975e35b-aef7-4213-8073-662c192a3ef0" 00:08:51.584 ], 00:08:51.584 "product_name": "Raid Volume", 00:08:51.584 "block_size": 512, 00:08:51.584 "num_blocks": 196608, 00:08:51.584 "uuid": "a975e35b-aef7-4213-8073-662c192a3ef0", 00:08:51.584 "assigned_rate_limits": { 00:08:51.584 "rw_ios_per_sec": 0, 00:08:51.584 "rw_mbytes_per_sec": 0, 00:08:51.584 "r_mbytes_per_sec": 0, 00:08:51.584 "w_mbytes_per_sec": 0 00:08:51.584 }, 00:08:51.584 "claimed": false, 00:08:51.584 "zoned": false, 00:08:51.584 "supported_io_types": { 00:08:51.584 "read": true, 00:08:51.584 
"write": true, 00:08:51.584 "unmap": true, 00:08:51.584 "flush": true, 00:08:51.584 "reset": true, 00:08:51.584 "nvme_admin": false, 00:08:51.584 "nvme_io": false, 00:08:51.584 "nvme_io_md": false, 00:08:51.584 "write_zeroes": true, 00:08:51.584 "zcopy": false, 00:08:51.584 "get_zone_info": false, 00:08:51.584 "zone_management": false, 00:08:51.584 "zone_append": false, 00:08:51.584 "compare": false, 00:08:51.584 "compare_and_write": false, 00:08:51.584 "abort": false, 00:08:51.584 "seek_hole": false, 00:08:51.584 "seek_data": false, 00:08:51.584 "copy": false, 00:08:51.584 "nvme_iov_md": false 00:08:51.584 }, 00:08:51.584 "memory_domains": [ 00:08:51.584 { 00:08:51.584 "dma_device_id": "system", 00:08:51.584 "dma_device_type": 1 00:08:51.584 }, 00:08:51.584 { 00:08:51.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.584 "dma_device_type": 2 00:08:51.584 }, 00:08:51.584 { 00:08:51.584 "dma_device_id": "system", 00:08:51.584 "dma_device_type": 1 00:08:51.584 }, 00:08:51.584 { 00:08:51.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.584 "dma_device_type": 2 00:08:51.584 }, 00:08:51.584 { 00:08:51.584 "dma_device_id": "system", 00:08:51.584 "dma_device_type": 1 00:08:51.584 }, 00:08:51.584 { 00:08:51.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.584 "dma_device_type": 2 00:08:51.584 } 00:08:51.584 ], 00:08:51.584 "driver_specific": { 00:08:51.584 "raid": { 00:08:51.584 "uuid": "a975e35b-aef7-4213-8073-662c192a3ef0", 00:08:51.584 "strip_size_kb": 64, 00:08:51.584 "state": "online", 00:08:51.584 "raid_level": "raid0", 00:08:51.584 "superblock": false, 00:08:51.584 "num_base_bdevs": 3, 00:08:51.584 "num_base_bdevs_discovered": 3, 00:08:51.584 "num_base_bdevs_operational": 3, 00:08:51.584 "base_bdevs_list": [ 00:08:51.584 { 00:08:51.584 "name": "BaseBdev1", 00:08:51.584 "uuid": "d40b2c10-5eac-437b-b91a-327000c2583b", 00:08:51.584 "is_configured": true, 00:08:51.584 "data_offset": 0, 00:08:51.584 "data_size": 65536 00:08:51.584 }, 00:08:51.584 { 00:08:51.584 "name": "BaseBdev2", 00:08:51.584 "uuid": "fee5d3c0-3c8f-463e-bc53-901323f49e87", 00:08:51.584 "is_configured": true, 00:08:51.584 "data_offset": 0, 00:08:51.584 "data_size": 65536 00:08:51.584 }, 00:08:51.584 { 00:08:51.584 "name": "BaseBdev3", 00:08:51.584 "uuid": "6d35c5ce-b0af-4f4d-b2e8-74285dfe083a", 00:08:51.584 "is_configured": true, 00:08:51.584 "data_offset": 0, 00:08:51.584 "data_size": 65536 00:08:51.584 } 00:08:51.584 ] 00:08:51.584 } 00:08:51.584 } 00:08:51.584 }' 00:08:51.584 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.584 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:51.584 BaseBdev2 00:08:51.584 BaseBdev3' 00:08:51.584 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:51.584 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:51.584 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:51.845 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:51.845 "name": "BaseBdev1", 00:08:51.845 "aliases": [ 00:08:51.845 "d40b2c10-5eac-437b-b91a-327000c2583b" 00:08:51.845 ], 00:08:51.845 "product_name": "Malloc disk", 00:08:51.845 "block_size": 512, 00:08:51.845 
"num_blocks": 65536, 00:08:51.845 "uuid": "d40b2c10-5eac-437b-b91a-327000c2583b", 00:08:51.845 "assigned_rate_limits": { 00:08:51.845 "rw_ios_per_sec": 0, 00:08:51.845 "rw_mbytes_per_sec": 0, 00:08:51.845 "r_mbytes_per_sec": 0, 00:08:51.845 "w_mbytes_per_sec": 0 00:08:51.845 }, 00:08:51.845 "claimed": true, 00:08:51.845 "claim_type": "exclusive_write", 00:08:51.845 "zoned": false, 00:08:51.845 "supported_io_types": { 00:08:51.845 "read": true, 00:08:51.845 "write": true, 00:08:51.845 "unmap": true, 00:08:51.845 "flush": true, 00:08:51.845 "reset": true, 00:08:51.845 "nvme_admin": false, 00:08:51.845 "nvme_io": false, 00:08:51.845 "nvme_io_md": false, 00:08:51.845 "write_zeroes": true, 00:08:51.845 "zcopy": true, 00:08:51.845 "get_zone_info": false, 00:08:51.845 "zone_management": false, 00:08:51.845 "zone_append": false, 00:08:51.845 "compare": false, 00:08:51.845 "compare_and_write": false, 00:08:51.845 "abort": true, 00:08:51.845 "seek_hole": false, 00:08:51.845 "seek_data": false, 00:08:51.845 "copy": true, 00:08:51.845 "nvme_iov_md": false 00:08:51.845 }, 00:08:51.845 "memory_domains": [ 00:08:51.845 { 00:08:51.845 "dma_device_id": "system", 00:08:51.845 "dma_device_type": 1 00:08:51.845 }, 00:08:51.845 { 00:08:51.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.845 "dma_device_type": 2 00:08:51.845 } 00:08:51.845 ], 00:08:51.845 "driver_specific": {} 00:08:51.845 }' 00:08:51.845 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:51.845 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.106 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.365 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:52.365 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:52.365 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:52.365 06:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:52.365 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:52.365 "name": "BaseBdev2", 00:08:52.365 "aliases": [ 00:08:52.365 "fee5d3c0-3c8f-463e-bc53-901323f49e87" 00:08:52.365 ], 00:08:52.365 "product_name": "Malloc disk", 00:08:52.365 "block_size": 512, 00:08:52.365 "num_blocks": 65536, 00:08:52.365 "uuid": "fee5d3c0-3c8f-463e-bc53-901323f49e87", 00:08:52.365 "assigned_rate_limits": { 00:08:52.365 "rw_ios_per_sec": 0, 00:08:52.365 "rw_mbytes_per_sec": 0, 
00:08:52.365 "r_mbytes_per_sec": 0, 00:08:52.365 "w_mbytes_per_sec": 0 00:08:52.365 }, 00:08:52.365 "claimed": true, 00:08:52.365 "claim_type": "exclusive_write", 00:08:52.365 "zoned": false, 00:08:52.365 "supported_io_types": { 00:08:52.365 "read": true, 00:08:52.365 "write": true, 00:08:52.365 "unmap": true, 00:08:52.365 "flush": true, 00:08:52.365 "reset": true, 00:08:52.365 "nvme_admin": false, 00:08:52.365 "nvme_io": false, 00:08:52.365 "nvme_io_md": false, 00:08:52.365 "write_zeroes": true, 00:08:52.365 "zcopy": true, 00:08:52.365 "get_zone_info": false, 00:08:52.365 "zone_management": false, 00:08:52.365 "zone_append": false, 00:08:52.365 "compare": false, 00:08:52.365 "compare_and_write": false, 00:08:52.365 "abort": true, 00:08:52.365 "seek_hole": false, 00:08:52.365 "seek_data": false, 00:08:52.365 "copy": true, 00:08:52.365 "nvme_iov_md": false 00:08:52.365 }, 00:08:52.365 "memory_domains": [ 00:08:52.365 { 00:08:52.365 "dma_device_id": "system", 00:08:52.365 "dma_device_type": 1 00:08:52.365 }, 00:08:52.365 { 00:08:52.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.365 "dma_device_type": 2 00:08:52.365 } 00:08:52.365 ], 00:08:52.365 "driver_specific": {} 00:08:52.365 }' 00:08:52.365 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:52.365 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.625 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.884 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:52.884 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:52.884 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:08:52.884 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:52.884 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:52.884 "name": "BaseBdev3", 00:08:52.884 "aliases": [ 00:08:52.884 "6d35c5ce-b0af-4f4d-b2e8-74285dfe083a" 00:08:52.884 ], 00:08:52.884 "product_name": "Malloc disk", 00:08:52.884 "block_size": 512, 00:08:52.884 "num_blocks": 65536, 00:08:52.884 "uuid": "6d35c5ce-b0af-4f4d-b2e8-74285dfe083a", 00:08:52.884 "assigned_rate_limits": { 00:08:52.884 "rw_ios_per_sec": 0, 00:08:52.884 "rw_mbytes_per_sec": 0, 00:08:52.884 "r_mbytes_per_sec": 0, 00:08:52.884 "w_mbytes_per_sec": 0 00:08:52.884 }, 00:08:52.884 "claimed": true, 00:08:52.884 "claim_type": "exclusive_write", 00:08:52.884 "zoned": false, 
00:08:52.884 "supported_io_types": { 00:08:52.884 "read": true, 00:08:52.884 "write": true, 00:08:52.884 "unmap": true, 00:08:52.884 "flush": true, 00:08:52.884 "reset": true, 00:08:52.884 "nvme_admin": false, 00:08:52.884 "nvme_io": false, 00:08:52.885 "nvme_io_md": false, 00:08:52.885 "write_zeroes": true, 00:08:52.885 "zcopy": true, 00:08:52.885 "get_zone_info": false, 00:08:52.885 "zone_management": false, 00:08:52.885 "zone_append": false, 00:08:52.885 "compare": false, 00:08:52.885 "compare_and_write": false, 00:08:52.885 "abort": true, 00:08:52.885 "seek_hole": false, 00:08:52.885 "seek_data": false, 00:08:52.885 "copy": true, 00:08:52.885 "nvme_iov_md": false 00:08:52.885 }, 00:08:52.885 "memory_domains": [ 00:08:52.885 { 00:08:52.885 "dma_device_id": "system", 00:08:52.885 "dma_device_type": 1 00:08:52.885 }, 00:08:52.885 { 00:08:52.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.885 "dma_device_type": 2 00:08:52.885 } 00:08:52.885 ], 00:08:52.885 "driver_specific": {} 00:08:52.885 }' 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:53.145 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:53.405 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:53.405 06:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:53.405 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:53.405 [2024-08-13 06:03:55.191111] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.405 [2024-08-13 06:03:55.191229] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.405 [2024-08-13 06:03:55.191316] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:53.665 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:53.666 "name": "Existed_Raid", 00:08:53.666 "uuid": "a975e35b-aef7-4213-8073-662c192a3ef0", 00:08:53.666 "strip_size_kb": 64, 00:08:53.666 "state": "offline", 00:08:53.666 "raid_level": "raid0", 00:08:53.666 "superblock": false, 00:08:53.666 "num_base_bdevs": 3, 00:08:53.666 "num_base_bdevs_discovered": 2, 00:08:53.666 "num_base_bdevs_operational": 2, 00:08:53.666 "base_bdevs_list": [ 00:08:53.666 { 00:08:53.666 "name": null, 00:08:53.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.666 "is_configured": false, 00:08:53.666 "data_offset": 0, 00:08:53.666 "data_size": 65536 00:08:53.666 }, 00:08:53.666 { 00:08:53.666 "name": "BaseBdev2", 00:08:53.666 "uuid": "fee5d3c0-3c8f-463e-bc53-901323f49e87", 00:08:53.666 "is_configured": true, 00:08:53.666 "data_offset": 0, 00:08:53.666 "data_size": 65536 00:08:53.666 }, 00:08:53.666 { 00:08:53.666 "name": "BaseBdev3", 00:08:53.666 "uuid": "6d35c5ce-b0af-4f4d-b2e8-74285dfe083a", 00:08:53.666 "is_configured": true, 00:08:53.666 "data_offset": 0, 00:08:53.666 "data_size": 65536 00:08:53.666 } 00:08:53.666 ] 00:08:53.666 }' 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:53.666 06:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.604 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:54.604 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:54.604 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.604 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:54.604 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:54.604 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:54.604 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 
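What these checks encode is that raid0 carries no redundancy (has_redundancy returns 1 for it), so removing any single member does not degrade the array but takes it from "online" to "offline", with the remaining members still listed as configured. A rough sketch of that assertion, reusing the RPCs shown above and an assumed jq check:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # drop one member of the raid0 array
  $rpc bdev_malloc_delete BaseBdev1

  # with no redundancy the array must now report "offline"
  state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [[ "$state" == "offline" ]] || echo "unexpected raid state: $state"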
00:08:54.604 [2024-08-13 06:03:56.388755] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:54.864 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:54.864 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:54.864 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.864 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:54.864 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:54.864 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:54.864 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:55.123 [2024-08-13 06:03:56.827481] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.123 [2024-08-13 06:03:56.827543] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:55.123 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:55.123 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:55.123 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.123 06:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:55.383 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:55.383 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:55.383 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:08:55.383 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:08:55.383 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:55.383 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.643 BaseBdev2 00:08:55.643 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:08:55.643 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:08:55.643 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:55.643 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:55.643 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:55.643 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:55.643 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:55.902 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.902 [ 00:08:55.902 { 00:08:55.902 "name": "BaseBdev2", 00:08:55.902 "aliases": [ 00:08:55.902 "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa" 00:08:55.902 ], 00:08:55.902 "product_name": "Malloc disk", 00:08:55.902 "block_size": 512, 00:08:55.902 "num_blocks": 65536, 00:08:55.902 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:08:55.902 "assigned_rate_limits": { 00:08:55.902 "rw_ios_per_sec": 0, 00:08:55.902 "rw_mbytes_per_sec": 0, 00:08:55.902 "r_mbytes_per_sec": 0, 00:08:55.902 "w_mbytes_per_sec": 0 00:08:55.902 }, 00:08:55.902 "claimed": false, 00:08:55.902 "zoned": false, 00:08:55.902 "supported_io_types": { 00:08:55.902 "read": true, 00:08:55.902 "write": true, 00:08:55.902 "unmap": true, 00:08:55.902 "flush": true, 00:08:55.902 "reset": true, 00:08:55.902 "nvme_admin": false, 00:08:55.902 "nvme_io": false, 00:08:55.902 "nvme_io_md": false, 00:08:55.902 "write_zeroes": true, 00:08:55.902 "zcopy": true, 00:08:55.902 "get_zone_info": false, 00:08:55.902 "zone_management": false, 00:08:55.903 "zone_append": false, 00:08:55.903 "compare": false, 00:08:55.903 "compare_and_write": false, 00:08:55.903 "abort": true, 00:08:55.903 "seek_hole": false, 00:08:55.903 "seek_data": false, 00:08:55.903 "copy": true, 00:08:55.903 "nvme_iov_md": false 00:08:55.903 }, 00:08:55.903 "memory_domains": [ 00:08:55.903 { 00:08:55.903 "dma_device_id": "system", 00:08:55.903 "dma_device_type": 1 00:08:55.903 }, 00:08:55.903 { 00:08:55.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.903 "dma_device_type": 2 00:08:55.903 } 00:08:55.903 ], 00:08:55.903 "driver_specific": {} 00:08:55.903 } 00:08:55.903 ] 00:08:55.903 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:55.903 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:55.903 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:55.903 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.163 BaseBdev3 00:08:56.163 06:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:08:56.163 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:08:56.163 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:56.163 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:56.163 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:56.163 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:56.163 06:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:56.422 06:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.682 [ 00:08:56.682 { 00:08:56.682 "name": "BaseBdev3", 00:08:56.682 "aliases": [ 00:08:56.682 "8d7a2800-4309-4db1-8d2a-99738566cac7" 00:08:56.682 ], 00:08:56.682 "product_name": "Malloc disk", 00:08:56.682 "block_size": 512, 00:08:56.682 "num_blocks": 65536, 00:08:56.682 
"uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:08:56.682 "assigned_rate_limits": { 00:08:56.682 "rw_ios_per_sec": 0, 00:08:56.682 "rw_mbytes_per_sec": 0, 00:08:56.682 "r_mbytes_per_sec": 0, 00:08:56.682 "w_mbytes_per_sec": 0 00:08:56.682 }, 00:08:56.682 "claimed": false, 00:08:56.682 "zoned": false, 00:08:56.682 "supported_io_types": { 00:08:56.682 "read": true, 00:08:56.682 "write": true, 00:08:56.682 "unmap": true, 00:08:56.682 "flush": true, 00:08:56.682 "reset": true, 00:08:56.682 "nvme_admin": false, 00:08:56.682 "nvme_io": false, 00:08:56.682 "nvme_io_md": false, 00:08:56.682 "write_zeroes": true, 00:08:56.682 "zcopy": true, 00:08:56.682 "get_zone_info": false, 00:08:56.682 "zone_management": false, 00:08:56.682 "zone_append": false, 00:08:56.682 "compare": false, 00:08:56.682 "compare_and_write": false, 00:08:56.682 "abort": true, 00:08:56.682 "seek_hole": false, 00:08:56.682 "seek_data": false, 00:08:56.682 "copy": true, 00:08:56.682 "nvme_iov_md": false 00:08:56.682 }, 00:08:56.682 "memory_domains": [ 00:08:56.682 { 00:08:56.682 "dma_device_id": "system", 00:08:56.682 "dma_device_type": 1 00:08:56.682 }, 00:08:56.682 { 00:08:56.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.682 "dma_device_type": 2 00:08:56.682 } 00:08:56.682 ], 00:08:56.682 "driver_specific": {} 00:08:56.682 } 00:08:56.682 ] 00:08:56.682 06:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:56.682 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:56.682 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:56.683 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:56.683 [2024-08-13 06:03:58.469893] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.683 [2024-08-13 06:03:58.470044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.683 [2024-08-13 06:03:58.470095] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.683 [2024-08-13 06:03:58.471900] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:56.942 "name": "Existed_Raid", 00:08:56.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.942 "strip_size_kb": 64, 00:08:56.942 "state": "configuring", 00:08:56.942 "raid_level": "raid0", 00:08:56.942 "superblock": false, 00:08:56.942 "num_base_bdevs": 3, 00:08:56.942 "num_base_bdevs_discovered": 2, 00:08:56.942 "num_base_bdevs_operational": 3, 00:08:56.942 "base_bdevs_list": [ 00:08:56.942 { 00:08:56.942 "name": "BaseBdev1", 00:08:56.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.942 "is_configured": false, 00:08:56.942 "data_offset": 0, 00:08:56.942 "data_size": 0 00:08:56.942 }, 00:08:56.942 { 00:08:56.942 "name": "BaseBdev2", 00:08:56.942 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:08:56.942 "is_configured": true, 00:08:56.942 "data_offset": 0, 00:08:56.942 "data_size": 65536 00:08:56.942 }, 00:08:56.942 { 00:08:56.942 "name": "BaseBdev3", 00:08:56.942 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:08:56.942 "is_configured": true, 00:08:56.942 "data_offset": 0, 00:08:56.942 "data_size": 65536 00:08:56.942 } 00:08:56.942 ] 00:08:56.942 }' 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:56.942 06:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.523 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:08:57.782 [2024-08-13 06:03:59.428260] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.782 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.041 06:03:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:58.041 "name": "Existed_Raid", 00:08:58.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.041 "strip_size_kb": 64, 00:08:58.041 "state": "configuring", 00:08:58.041 "raid_level": "raid0", 00:08:58.041 "superblock": false, 00:08:58.041 "num_base_bdevs": 3, 00:08:58.041 "num_base_bdevs_discovered": 1, 00:08:58.041 "num_base_bdevs_operational": 3, 00:08:58.041 "base_bdevs_list": [ 00:08:58.041 { 00:08:58.041 "name": "BaseBdev1", 00:08:58.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.041 "is_configured": false, 00:08:58.041 "data_offset": 0, 00:08:58.041 "data_size": 0 00:08:58.041 }, 00:08:58.041 { 00:08:58.041 "name": null, 00:08:58.041 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:08:58.041 "is_configured": false, 00:08:58.041 "data_offset": 0, 00:08:58.041 "data_size": 65536 00:08:58.041 }, 00:08:58.041 { 00:08:58.041 "name": "BaseBdev3", 00:08:58.041 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:08:58.041 "is_configured": true, 00:08:58.041 "data_offset": 0, 00:08:58.041 "data_size": 65536 00:08:58.041 } 00:08:58.041 ] 00:08:58.041 }' 00:08:58.041 06:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:58.041 06:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.610 06:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.610 06:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.870 [2024-08-13 06:04:00.613267] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.870 BaseBdev1 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:58.870 06:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:59.129 06:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.389 [ 00:08:59.389 { 00:08:59.389 "name": "BaseBdev1", 00:08:59.389 "aliases": [ 00:08:59.389 "0f71f5ee-526c-4b88-a765-bcf28c93117d" 00:08:59.389 ], 00:08:59.389 "product_name": "Malloc disk", 00:08:59.389 "block_size": 512, 00:08:59.389 "num_blocks": 65536, 00:08:59.389 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:08:59.389 
"assigned_rate_limits": { 00:08:59.389 "rw_ios_per_sec": 0, 00:08:59.389 "rw_mbytes_per_sec": 0, 00:08:59.389 "r_mbytes_per_sec": 0, 00:08:59.389 "w_mbytes_per_sec": 0 00:08:59.389 }, 00:08:59.389 "claimed": true, 00:08:59.389 "claim_type": "exclusive_write", 00:08:59.389 "zoned": false, 00:08:59.389 "supported_io_types": { 00:08:59.389 "read": true, 00:08:59.389 "write": true, 00:08:59.389 "unmap": true, 00:08:59.389 "flush": true, 00:08:59.389 "reset": true, 00:08:59.389 "nvme_admin": false, 00:08:59.389 "nvme_io": false, 00:08:59.389 "nvme_io_md": false, 00:08:59.389 "write_zeroes": true, 00:08:59.389 "zcopy": true, 00:08:59.389 "get_zone_info": false, 00:08:59.389 "zone_management": false, 00:08:59.389 "zone_append": false, 00:08:59.389 "compare": false, 00:08:59.389 "compare_and_write": false, 00:08:59.389 "abort": true, 00:08:59.389 "seek_hole": false, 00:08:59.389 "seek_data": false, 00:08:59.389 "copy": true, 00:08:59.389 "nvme_iov_md": false 00:08:59.389 }, 00:08:59.389 "memory_domains": [ 00:08:59.389 { 00:08:59.389 "dma_device_id": "system", 00:08:59.390 "dma_device_type": 1 00:08:59.390 }, 00:08:59.390 { 00:08:59.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.390 "dma_device_type": 2 00:08:59.390 } 00:08:59.390 ], 00:08:59.390 "driver_specific": {} 00:08:59.390 } 00:08:59.390 ] 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.390 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.649 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:59.649 "name": "Existed_Raid", 00:08:59.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.649 "strip_size_kb": 64, 00:08:59.649 "state": "configuring", 00:08:59.649 "raid_level": "raid0", 00:08:59.649 "superblock": false, 00:08:59.649 "num_base_bdevs": 3, 00:08:59.649 "num_base_bdevs_discovered": 2, 00:08:59.649 "num_base_bdevs_operational": 3, 00:08:59.649 "base_bdevs_list": [ 00:08:59.649 { 00:08:59.649 "name": "BaseBdev1", 00:08:59.649 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:08:59.649 "is_configured": true, 
00:08:59.649 "data_offset": 0, 00:08:59.649 "data_size": 65536 00:08:59.649 }, 00:08:59.649 { 00:08:59.649 "name": null, 00:08:59.649 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:08:59.649 "is_configured": false, 00:08:59.649 "data_offset": 0, 00:08:59.649 "data_size": 65536 00:08:59.649 }, 00:08:59.649 { 00:08:59.649 "name": "BaseBdev3", 00:08:59.649 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:08:59.649 "is_configured": true, 00:08:59.649 "data_offset": 0, 00:08:59.649 "data_size": 65536 00:08:59.649 } 00:08:59.649 ] 00:08:59.649 }' 00:08:59.649 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:59.649 06:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.218 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.218 06:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:00.478 [2024-08-13 06:04:02.214689] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.478 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.739 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:00.739 "name": "Existed_Raid", 00:09:00.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.739 "strip_size_kb": 64, 00:09:00.739 "state": "configuring", 00:09:00.739 "raid_level": "raid0", 00:09:00.739 "superblock": false, 00:09:00.739 "num_base_bdevs": 3, 00:09:00.739 "num_base_bdevs_discovered": 1, 00:09:00.739 "num_base_bdevs_operational": 3, 00:09:00.739 "base_bdevs_list": [ 00:09:00.739 { 00:09:00.739 "name": "BaseBdev1", 00:09:00.739 "uuid": 
"0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:00.739 "is_configured": true, 00:09:00.739 "data_offset": 0, 00:09:00.739 "data_size": 65536 00:09:00.739 }, 00:09:00.739 { 00:09:00.739 "name": null, 00:09:00.739 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:09:00.739 "is_configured": false, 00:09:00.739 "data_offset": 0, 00:09:00.739 "data_size": 65536 00:09:00.739 }, 00:09:00.739 { 00:09:00.739 "name": null, 00:09:00.739 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:09:00.739 "is_configured": false, 00:09:00.739 "data_offset": 0, 00:09:00.739 "data_size": 65536 00:09:00.739 } 00:09:00.739 ] 00:09:00.739 }' 00:09:00.739 06:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:00.739 06:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.307 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.307 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.567 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:01.567 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:01.826 [2024-08-13 06:04:03.480565] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.826 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.826 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:01.826 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:01.826 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.827 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.084 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:02.084 "name": "Existed_Raid", 00:09:02.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.084 "strip_size_kb": 64, 00:09:02.084 "state": "configuring", 00:09:02.084 "raid_level": "raid0", 00:09:02.084 "superblock": false, 00:09:02.084 "num_base_bdevs": 3, 00:09:02.084 "num_base_bdevs_discovered": 2, 00:09:02.084 "num_base_bdevs_operational": 3, 00:09:02.084 "base_bdevs_list": 
[ 00:09:02.084 { 00:09:02.084 "name": "BaseBdev1", 00:09:02.084 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:02.084 "is_configured": true, 00:09:02.084 "data_offset": 0, 00:09:02.084 "data_size": 65536 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "name": null, 00:09:02.084 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:09:02.084 "is_configured": false, 00:09:02.084 "data_offset": 0, 00:09:02.084 "data_size": 65536 00:09:02.084 }, 00:09:02.084 { 00:09:02.084 "name": "BaseBdev3", 00:09:02.084 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:09:02.084 "is_configured": true, 00:09:02.084 "data_offset": 0, 00:09:02.084 "data_size": 65536 00:09:02.084 } 00:09:02.084 ] 00:09:02.084 }' 00:09:02.084 06:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:02.084 06:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.653 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.653 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:02.914 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:02.914 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:02.914 [2024-08-13 06:04:04.690486] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:03.175 "name": "Existed_Raid", 00:09:03.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.175 "strip_size_kb": 64, 00:09:03.175 "state": "configuring", 00:09:03.175 "raid_level": "raid0", 00:09:03.175 "superblock": false, 00:09:03.175 "num_base_bdevs": 3, 00:09:03.175 "num_base_bdevs_discovered": 1, 00:09:03.175 
"num_base_bdevs_operational": 3, 00:09:03.175 "base_bdevs_list": [ 00:09:03.175 { 00:09:03.175 "name": null, 00:09:03.175 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:03.175 "is_configured": false, 00:09:03.175 "data_offset": 0, 00:09:03.175 "data_size": 65536 00:09:03.175 }, 00:09:03.175 { 00:09:03.175 "name": null, 00:09:03.175 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:09:03.175 "is_configured": false, 00:09:03.175 "data_offset": 0, 00:09:03.175 "data_size": 65536 00:09:03.175 }, 00:09:03.175 { 00:09:03.175 "name": "BaseBdev3", 00:09:03.175 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:09:03.175 "is_configured": true, 00:09:03.175 "data_offset": 0, 00:09:03.175 "data_size": 65536 00:09:03.175 } 00:09:03.175 ] 00:09:03.175 }' 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:03.175 06:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.744 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.744 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.003 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:04.003 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:04.263 [2024-08-13 06:04:05.879089] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.263 06:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.522 06:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:04.522 "name": "Existed_Raid", 00:09:04.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.522 "strip_size_kb": 64, 00:09:04.522 "state": "configuring", 00:09:04.522 "raid_level": "raid0", 00:09:04.522 "superblock": false, 00:09:04.522 
"num_base_bdevs": 3, 00:09:04.522 "num_base_bdevs_discovered": 2, 00:09:04.522 "num_base_bdevs_operational": 3, 00:09:04.522 "base_bdevs_list": [ 00:09:04.522 { 00:09:04.522 "name": null, 00:09:04.522 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:04.522 "is_configured": false, 00:09:04.522 "data_offset": 0, 00:09:04.522 "data_size": 65536 00:09:04.522 }, 00:09:04.522 { 00:09:04.522 "name": "BaseBdev2", 00:09:04.522 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:09:04.522 "is_configured": true, 00:09:04.522 "data_offset": 0, 00:09:04.522 "data_size": 65536 00:09:04.522 }, 00:09:04.522 { 00:09:04.522 "name": "BaseBdev3", 00:09:04.522 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:09:04.522 "is_configured": true, 00:09:04.522 "data_offset": 0, 00:09:04.522 "data_size": 65536 00:09:04.522 } 00:09:04.522 ] 00:09:04.522 }' 00:09:04.522 06:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:04.522 06:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.090 06:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.090 06:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.090 06:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:05.090 06:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.090 06:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:05.373 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0f71f5ee-526c-4b88-a765-bcf28c93117d 00:09:05.634 [2024-08-13 06:04:07.183752] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:05.634 [2024-08-13 06:04:07.183875] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:05.634 [2024-08-13 06:04:07.183898] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:05.634 [2024-08-13 06:04:07.184168] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:05.634 [2024-08-13 06:04:07.184328] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:05.634 [2024-08-13 06:04:07.184370] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:05.634 [2024-08-13 06:04:07.184589] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.634 NewBaseBdev 00:09:05.634 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:05.634 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:09:05.634 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:05.634 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:05.634 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:05.634 06:04:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:05.634 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:05.634 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:05.893 [ 00:09:05.893 { 00:09:05.893 "name": "NewBaseBdev", 00:09:05.893 "aliases": [ 00:09:05.893 "0f71f5ee-526c-4b88-a765-bcf28c93117d" 00:09:05.893 ], 00:09:05.893 "product_name": "Malloc disk", 00:09:05.893 "block_size": 512, 00:09:05.893 "num_blocks": 65536, 00:09:05.893 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:05.893 "assigned_rate_limits": { 00:09:05.893 "rw_ios_per_sec": 0, 00:09:05.893 "rw_mbytes_per_sec": 0, 00:09:05.893 "r_mbytes_per_sec": 0, 00:09:05.893 "w_mbytes_per_sec": 0 00:09:05.893 }, 00:09:05.893 "claimed": true, 00:09:05.893 "claim_type": "exclusive_write", 00:09:05.893 "zoned": false, 00:09:05.893 "supported_io_types": { 00:09:05.893 "read": true, 00:09:05.893 "write": true, 00:09:05.893 "unmap": true, 00:09:05.893 "flush": true, 00:09:05.893 "reset": true, 00:09:05.893 "nvme_admin": false, 00:09:05.893 "nvme_io": false, 00:09:05.893 "nvme_io_md": false, 00:09:05.893 "write_zeroes": true, 00:09:05.893 "zcopy": true, 00:09:05.893 "get_zone_info": false, 00:09:05.893 "zone_management": false, 00:09:05.893 "zone_append": false, 00:09:05.893 "compare": false, 00:09:05.893 "compare_and_write": false, 00:09:05.893 "abort": true, 00:09:05.893 "seek_hole": false, 00:09:05.893 "seek_data": false, 00:09:05.893 "copy": true, 00:09:05.893 "nvme_iov_md": false 00:09:05.893 }, 00:09:05.893 "memory_domains": [ 00:09:05.893 { 00:09:05.893 "dma_device_id": "system", 00:09:05.893 "dma_device_type": 1 00:09:05.893 }, 00:09:05.893 { 00:09:05.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.893 "dma_device_type": 2 00:09:05.893 } 00:09:05.893 ], 00:09:05.893 "driver_specific": {} 00:09:05.893 } 00:09:05.893 ] 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.893 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.152 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:06.152 "name": "Existed_Raid", 00:09:06.152 "uuid": "2df6a45c-10ec-4a09-bd47-dfa4f0aa0eb6", 00:09:06.152 "strip_size_kb": 64, 00:09:06.152 "state": "online", 00:09:06.152 "raid_level": "raid0", 00:09:06.152 "superblock": false, 00:09:06.152 "num_base_bdevs": 3, 00:09:06.152 "num_base_bdevs_discovered": 3, 00:09:06.152 "num_base_bdevs_operational": 3, 00:09:06.152 "base_bdevs_list": [ 00:09:06.152 { 00:09:06.152 "name": "NewBaseBdev", 00:09:06.152 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:06.152 "is_configured": true, 00:09:06.152 "data_offset": 0, 00:09:06.152 "data_size": 65536 00:09:06.152 }, 00:09:06.152 { 00:09:06.152 "name": "BaseBdev2", 00:09:06.152 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:09:06.152 "is_configured": true, 00:09:06.152 "data_offset": 0, 00:09:06.152 "data_size": 65536 00:09:06.152 }, 00:09:06.152 { 00:09:06.152 "name": "BaseBdev3", 00:09:06.152 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:09:06.152 "is_configured": true, 00:09:06.152 "data_offset": 0, 00:09:06.152 "data_size": 65536 00:09:06.152 } 00:09:06.152 ] 00:09:06.152 }' 00:09:06.152 06:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:06.152 06:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:06.720 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:06.980 [2024-08-13 06:04:08.537810] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.980 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:06.980 "name": "Existed_Raid", 00:09:06.980 "aliases": [ 00:09:06.980 "2df6a45c-10ec-4a09-bd47-dfa4f0aa0eb6" 00:09:06.980 ], 00:09:06.980 "product_name": "Raid Volume", 00:09:06.980 "block_size": 512, 00:09:06.980 "num_blocks": 196608, 00:09:06.980 "uuid": "2df6a45c-10ec-4a09-bd47-dfa4f0aa0eb6", 00:09:06.980 "assigned_rate_limits": { 00:09:06.980 "rw_ios_per_sec": 0, 00:09:06.980 "rw_mbytes_per_sec": 0, 00:09:06.980 "r_mbytes_per_sec": 0, 00:09:06.980 "w_mbytes_per_sec": 0 00:09:06.980 }, 00:09:06.980 "claimed": false, 00:09:06.980 "zoned": false, 00:09:06.980 "supported_io_types": { 00:09:06.980 "read": true, 00:09:06.980 "write": true, 00:09:06.980 "unmap": true, 00:09:06.980 "flush": true, 00:09:06.980 "reset": true, 00:09:06.980 "nvme_admin": false, 00:09:06.980 
"nvme_io": false, 00:09:06.980 "nvme_io_md": false, 00:09:06.980 "write_zeroes": true, 00:09:06.980 "zcopy": false, 00:09:06.980 "get_zone_info": false, 00:09:06.980 "zone_management": false, 00:09:06.980 "zone_append": false, 00:09:06.980 "compare": false, 00:09:06.980 "compare_and_write": false, 00:09:06.980 "abort": false, 00:09:06.980 "seek_hole": false, 00:09:06.980 "seek_data": false, 00:09:06.980 "copy": false, 00:09:06.980 "nvme_iov_md": false 00:09:06.980 }, 00:09:06.980 "memory_domains": [ 00:09:06.980 { 00:09:06.980 "dma_device_id": "system", 00:09:06.980 "dma_device_type": 1 00:09:06.980 }, 00:09:06.980 { 00:09:06.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.980 "dma_device_type": 2 00:09:06.980 }, 00:09:06.980 { 00:09:06.980 "dma_device_id": "system", 00:09:06.980 "dma_device_type": 1 00:09:06.980 }, 00:09:06.980 { 00:09:06.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.980 "dma_device_type": 2 00:09:06.980 }, 00:09:06.980 { 00:09:06.980 "dma_device_id": "system", 00:09:06.980 "dma_device_type": 1 00:09:06.980 }, 00:09:06.980 { 00:09:06.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.980 "dma_device_type": 2 00:09:06.980 } 00:09:06.980 ], 00:09:06.980 "driver_specific": { 00:09:06.980 "raid": { 00:09:06.980 "uuid": "2df6a45c-10ec-4a09-bd47-dfa4f0aa0eb6", 00:09:06.980 "strip_size_kb": 64, 00:09:06.980 "state": "online", 00:09:06.980 "raid_level": "raid0", 00:09:06.980 "superblock": false, 00:09:06.980 "num_base_bdevs": 3, 00:09:06.980 "num_base_bdevs_discovered": 3, 00:09:06.980 "num_base_bdevs_operational": 3, 00:09:06.980 "base_bdevs_list": [ 00:09:06.980 { 00:09:06.980 "name": "NewBaseBdev", 00:09:06.980 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:06.980 "is_configured": true, 00:09:06.980 "data_offset": 0, 00:09:06.980 "data_size": 65536 00:09:06.980 }, 00:09:06.980 { 00:09:06.980 "name": "BaseBdev2", 00:09:06.980 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:09:06.980 "is_configured": true, 00:09:06.980 "data_offset": 0, 00:09:06.980 "data_size": 65536 00:09:06.980 }, 00:09:06.980 { 00:09:06.980 "name": "BaseBdev3", 00:09:06.980 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:09:06.980 "is_configured": true, 00:09:06.980 "data_offset": 0, 00:09:06.980 "data_size": 65536 00:09:06.980 } 00:09:06.980 ] 00:09:06.980 } 00:09:06.980 } 00:09:06.980 }' 00:09:06.980 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.980 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:06.980 BaseBdev2 00:09:06.980 BaseBdev3' 00:09:06.980 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:06.980 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:06.980 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:07.239 "name": "NewBaseBdev", 00:09:07.239 "aliases": [ 00:09:07.239 "0f71f5ee-526c-4b88-a765-bcf28c93117d" 00:09:07.239 ], 00:09:07.239 "product_name": "Malloc disk", 00:09:07.239 "block_size": 512, 00:09:07.239 "num_blocks": 65536, 00:09:07.239 "uuid": "0f71f5ee-526c-4b88-a765-bcf28c93117d", 00:09:07.239 "assigned_rate_limits": { 00:09:07.239 
"rw_ios_per_sec": 0, 00:09:07.239 "rw_mbytes_per_sec": 0, 00:09:07.239 "r_mbytes_per_sec": 0, 00:09:07.239 "w_mbytes_per_sec": 0 00:09:07.239 }, 00:09:07.239 "claimed": true, 00:09:07.239 "claim_type": "exclusive_write", 00:09:07.239 "zoned": false, 00:09:07.239 "supported_io_types": { 00:09:07.239 "read": true, 00:09:07.239 "write": true, 00:09:07.239 "unmap": true, 00:09:07.239 "flush": true, 00:09:07.239 "reset": true, 00:09:07.239 "nvme_admin": false, 00:09:07.239 "nvme_io": false, 00:09:07.239 "nvme_io_md": false, 00:09:07.239 "write_zeroes": true, 00:09:07.239 "zcopy": true, 00:09:07.239 "get_zone_info": false, 00:09:07.239 "zone_management": false, 00:09:07.239 "zone_append": false, 00:09:07.239 "compare": false, 00:09:07.239 "compare_and_write": false, 00:09:07.239 "abort": true, 00:09:07.239 "seek_hole": false, 00:09:07.239 "seek_data": false, 00:09:07.239 "copy": true, 00:09:07.239 "nvme_iov_md": false 00:09:07.239 }, 00:09:07.239 "memory_domains": [ 00:09:07.239 { 00:09:07.239 "dma_device_id": "system", 00:09:07.239 "dma_device_type": 1 00:09:07.239 }, 00:09:07.239 { 00:09:07.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.239 "dma_device_type": 2 00:09:07.239 } 00:09:07.239 ], 00:09:07.239 "driver_specific": {} 00:09:07.239 }' 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:07.239 06:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:07.239 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:07.239 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:07.498 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:07.498 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:07.498 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:07.498 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:07.498 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:07.758 "name": "BaseBdev2", 00:09:07.758 "aliases": [ 00:09:07.758 "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa" 00:09:07.758 ], 00:09:07.758 "product_name": "Malloc disk", 00:09:07.758 "block_size": 512, 00:09:07.758 "num_blocks": 65536, 00:09:07.758 "uuid": "9d1d2d9b-9677-45e0-bbcd-35e73fab09aa", 00:09:07.758 "assigned_rate_limits": { 00:09:07.758 "rw_ios_per_sec": 0, 00:09:07.758 "rw_mbytes_per_sec": 0, 00:09:07.758 "r_mbytes_per_sec": 0, 00:09:07.758 "w_mbytes_per_sec": 0 00:09:07.758 }, 00:09:07.758 "claimed": true, 00:09:07.758 
"claim_type": "exclusive_write", 00:09:07.758 "zoned": false, 00:09:07.758 "supported_io_types": { 00:09:07.758 "read": true, 00:09:07.758 "write": true, 00:09:07.758 "unmap": true, 00:09:07.758 "flush": true, 00:09:07.758 "reset": true, 00:09:07.758 "nvme_admin": false, 00:09:07.758 "nvme_io": false, 00:09:07.758 "nvme_io_md": false, 00:09:07.758 "write_zeroes": true, 00:09:07.758 "zcopy": true, 00:09:07.758 "get_zone_info": false, 00:09:07.758 "zone_management": false, 00:09:07.758 "zone_append": false, 00:09:07.758 "compare": false, 00:09:07.758 "compare_and_write": false, 00:09:07.758 "abort": true, 00:09:07.758 "seek_hole": false, 00:09:07.758 "seek_data": false, 00:09:07.758 "copy": true, 00:09:07.758 "nvme_iov_md": false 00:09:07.758 }, 00:09:07.758 "memory_domains": [ 00:09:07.758 { 00:09:07.758 "dma_device_id": "system", 00:09:07.758 "dma_device_type": 1 00:09:07.758 }, 00:09:07.758 { 00:09:07.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.758 "dma_device_type": 2 00:09:07.758 } 00:09:07.758 ], 00:09:07.758 "driver_specific": {} 00:09:07.758 }' 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:07.758 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:08.018 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:08.018 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:08.018 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:08.018 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:08.018 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:08.277 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:08.277 "name": "BaseBdev3", 00:09:08.277 "aliases": [ 00:09:08.277 "8d7a2800-4309-4db1-8d2a-99738566cac7" 00:09:08.277 ], 00:09:08.277 "product_name": "Malloc disk", 00:09:08.277 "block_size": 512, 00:09:08.277 "num_blocks": 65536, 00:09:08.277 "uuid": "8d7a2800-4309-4db1-8d2a-99738566cac7", 00:09:08.277 "assigned_rate_limits": { 00:09:08.277 "rw_ios_per_sec": 0, 00:09:08.277 "rw_mbytes_per_sec": 0, 00:09:08.277 "r_mbytes_per_sec": 0, 00:09:08.277 "w_mbytes_per_sec": 0 00:09:08.277 }, 00:09:08.277 "claimed": true, 00:09:08.277 "claim_type": "exclusive_write", 00:09:08.277 "zoned": false, 00:09:08.277 "supported_io_types": { 00:09:08.277 "read": true, 00:09:08.277 "write": true, 00:09:08.277 "unmap": true, 00:09:08.277 
"flush": true, 00:09:08.277 "reset": true, 00:09:08.277 "nvme_admin": false, 00:09:08.277 "nvme_io": false, 00:09:08.277 "nvme_io_md": false, 00:09:08.277 "write_zeroes": true, 00:09:08.277 "zcopy": true, 00:09:08.277 "get_zone_info": false, 00:09:08.277 "zone_management": false, 00:09:08.277 "zone_append": false, 00:09:08.277 "compare": false, 00:09:08.277 "compare_and_write": false, 00:09:08.277 "abort": true, 00:09:08.277 "seek_hole": false, 00:09:08.277 "seek_data": false, 00:09:08.277 "copy": true, 00:09:08.277 "nvme_iov_md": false 00:09:08.277 }, 00:09:08.277 "memory_domains": [ 00:09:08.277 { 00:09:08.277 "dma_device_id": "system", 00:09:08.277 "dma_device_type": 1 00:09:08.277 }, 00:09:08.277 { 00:09:08.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.277 "dma_device_type": 2 00:09:08.277 } 00:09:08.277 ], 00:09:08.277 "driver_specific": {} 00:09:08.277 }' 00:09:08.277 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:08.277 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:08.277 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:08.277 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:08.277 06:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:08.277 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:08.277 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:08.277 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:08.537 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:08.537 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:08.537 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:08.537 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:08.537 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:08.797 [2024-08-13 06:04:10.374337] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.797 [2024-08-13 06:04:10.374446] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.797 [2024-08-13 06:04:10.374533] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.797 [2024-08-13 06:04:10.374602] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.797 [2024-08-13 06:04:10.374611] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 75169 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 75169 ']' 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 75169 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:08.797 06:04:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75169 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75169' 00:09:08.797 killing process with pid 75169 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 75169 00:09:08.797 [2024-08-13 06:04:10.420315] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.797 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 75169 00:09:08.797 [2024-08-13 06:04:10.451214] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:09.057 00:09:09.057 real 0m25.411s 00:09:09.057 user 0m47.092s 00:09:09.057 sys 0m3.926s 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.057 ************************************ 00:09:09.057 END TEST raid_state_function_test 00:09:09.057 ************************************ 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.057 06:04:10 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:09.057 06:04:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:09.057 06:04:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.057 06:04:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.057 ************************************ 00:09:09.057 START TEST raid_state_function_test_sb 00:09:09.057 ************************************ 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:09.057 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 
-- # echo BaseBdev3 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=76078 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 76078' 00:09:09.058 Process raid pid: 76078 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 76078 /var/tmp/spdk-raid.sock 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 76078 ']' 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:09.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:09.058 06:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.318 [2024-08-13 06:04:10.855712] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:09:09.318 [2024-08-13 06:04:10.855830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.318 [2024-08-13 06:04:11.002293] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.318 [2024-08-13 06:04:11.047924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.318 [2024-08-13 06:04:11.090440] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.318 [2024-08-13 06:04:11.090475] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:10.255 [2024-08-13 06:04:11.850137] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.255 [2024-08-13 06:04:11.850264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.255 [2024-08-13 06:04:11.850294] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.255 [2024-08-13 06:04:11.850315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.255 [2024-08-13 06:04:11.850337] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.255 [2024-08-13 06:04:11.850356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.255 06:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.515 06:04:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:10.515 "name": "Existed_Raid", 00:09:10.515 "uuid": "6e201264-2110-41d0-8799-bc85eb186551", 00:09:10.515 "strip_size_kb": 64, 00:09:10.515 "state": "configuring", 00:09:10.515 "raid_level": "raid0", 00:09:10.515 "superblock": true, 00:09:10.515 "num_base_bdevs": 3, 00:09:10.515 "num_base_bdevs_discovered": 0, 00:09:10.515 "num_base_bdevs_operational": 3, 00:09:10.515 "base_bdevs_list": [ 00:09:10.515 { 00:09:10.515 "name": "BaseBdev1", 00:09:10.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.515 "is_configured": false, 00:09:10.515 "data_offset": 0, 00:09:10.516 "data_size": 0 00:09:10.516 }, 00:09:10.516 { 00:09:10.516 "name": "BaseBdev2", 00:09:10.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.516 "is_configured": false, 00:09:10.516 "data_offset": 0, 00:09:10.516 "data_size": 0 00:09:10.516 }, 00:09:10.516 { 00:09:10.516 "name": "BaseBdev3", 00:09:10.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.516 "is_configured": false, 00:09:10.516 "data_offset": 0, 00:09:10.516 "data_size": 0 00:09:10.516 } 00:09:10.516 ] 00:09:10.516 }' 00:09:10.516 06:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:10.516 06:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.084 06:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:11.084 [2024-08-13 06:04:12.792383] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.084 [2024-08-13 06:04:12.792421] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:11.084 06:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:11.344 [2024-08-13 06:04:12.996083] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.344 [2024-08-13 06:04:12.996129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.344 [2024-08-13 06:04:12.996139] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.344 [2024-08-13 06:04:12.996146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.344 [2024-08-13 06:04:12.996153] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.344 [2024-08-13 06:04:12.996160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.344 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.605 [2024-08-13 06:04:13.204525] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.605 BaseBdev1 00:09:11.605 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:11.605 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:11.605 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 
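The trace above declares the raid0 array with a superblock (-s) before any of its base bdevs exist, which is why Existed_Raid sits in the "configuring" state with num_base_bdevs_discovered at 0, and then starts backing the slots with malloc bdevs one at a time. A minimal sketch of that RPC sequence, assuming an SPDK target is already serving /var/tmp/spdk-raid.sock as in this run (only the $rpc shorthand is added; every command appears in the trace), would be:
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Declare the array first: the named base bdevs are absent, so the raid stays "configuring".
$rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Back the first slot with a 32 MB, 512-byte-block malloc bdev (65536 blocks in the dump above).
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_wait_for_examine
# Re-read the array; num_base_bdevs_discovered should now report 1.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'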
00:09:11.605 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:11.605 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:11.605 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:11.605 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.865 [ 00:09:11.865 { 00:09:11.865 "name": "BaseBdev1", 00:09:11.865 "aliases": [ 00:09:11.865 "e30428df-4158-438a-bb54-17aee3d5ad42" 00:09:11.865 ], 00:09:11.865 "product_name": "Malloc disk", 00:09:11.865 "block_size": 512, 00:09:11.865 "num_blocks": 65536, 00:09:11.865 "uuid": "e30428df-4158-438a-bb54-17aee3d5ad42", 00:09:11.865 "assigned_rate_limits": { 00:09:11.865 "rw_ios_per_sec": 0, 00:09:11.865 "rw_mbytes_per_sec": 0, 00:09:11.865 "r_mbytes_per_sec": 0, 00:09:11.865 "w_mbytes_per_sec": 0 00:09:11.865 }, 00:09:11.865 "claimed": true, 00:09:11.865 "claim_type": "exclusive_write", 00:09:11.865 "zoned": false, 00:09:11.865 "supported_io_types": { 00:09:11.865 "read": true, 00:09:11.865 "write": true, 00:09:11.865 "unmap": true, 00:09:11.865 "flush": true, 00:09:11.865 "reset": true, 00:09:11.865 "nvme_admin": false, 00:09:11.865 "nvme_io": false, 00:09:11.865 "nvme_io_md": false, 00:09:11.865 "write_zeroes": true, 00:09:11.865 "zcopy": true, 00:09:11.865 "get_zone_info": false, 00:09:11.865 "zone_management": false, 00:09:11.865 "zone_append": false, 00:09:11.865 "compare": false, 00:09:11.865 "compare_and_write": false, 00:09:11.865 "abort": true, 00:09:11.865 "seek_hole": false, 00:09:11.865 "seek_data": false, 00:09:11.865 "copy": true, 00:09:11.865 "nvme_iov_md": false 00:09:11.865 }, 00:09:11.865 "memory_domains": [ 00:09:11.865 { 00:09:11.865 "dma_device_id": "system", 00:09:11.865 "dma_device_type": 1 00:09:11.865 }, 00:09:11.865 { 00:09:11.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.865 "dma_device_type": 2 00:09:11.865 } 00:09:11.865 ], 00:09:11.865 "driver_specific": {} 00:09:11.865 } 00:09:11.865 ] 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.865 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.131 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:12.131 "name": "Existed_Raid", 00:09:12.131 "uuid": "234b406a-5280-42b3-8bc6-d62866699677", 00:09:12.131 "strip_size_kb": 64, 00:09:12.131 "state": "configuring", 00:09:12.131 "raid_level": "raid0", 00:09:12.131 "superblock": true, 00:09:12.131 "num_base_bdevs": 3, 00:09:12.131 "num_base_bdevs_discovered": 1, 00:09:12.131 "num_base_bdevs_operational": 3, 00:09:12.131 "base_bdevs_list": [ 00:09:12.131 { 00:09:12.131 "name": "BaseBdev1", 00:09:12.131 "uuid": "e30428df-4158-438a-bb54-17aee3d5ad42", 00:09:12.131 "is_configured": true, 00:09:12.131 "data_offset": 2048, 00:09:12.131 "data_size": 63488 00:09:12.131 }, 00:09:12.131 { 00:09:12.131 "name": "BaseBdev2", 00:09:12.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.131 "is_configured": false, 00:09:12.131 "data_offset": 0, 00:09:12.131 "data_size": 0 00:09:12.131 }, 00:09:12.131 { 00:09:12.131 "name": "BaseBdev3", 00:09:12.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.131 "is_configured": false, 00:09:12.131 "data_offset": 0, 00:09:12.131 "data_size": 0 00:09:12.131 } 00:09:12.131 ] 00:09:12.131 }' 00:09:12.131 06:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:12.131 06:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.701 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:12.960 [2024-08-13 06:04:14.538256] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.960 [2024-08-13 06:04:14.538321] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:12.960 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:12.960 [2024-08-13 06:04:14.742100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.960 [2024-08-13 06:04:14.743842] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.960 [2024-08-13 06:04:14.743885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.960 [2024-08-13 06:04:14.743897] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.961 [2024-08-13 06:04:14.743905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:13.220 "name": "Existed_Raid", 00:09:13.220 "uuid": "ead3dd24-9670-4015-9695-7e1d3d8e00eb", 00:09:13.220 "strip_size_kb": 64, 00:09:13.220 "state": "configuring", 00:09:13.220 "raid_level": "raid0", 00:09:13.220 "superblock": true, 00:09:13.220 "num_base_bdevs": 3, 00:09:13.220 "num_base_bdevs_discovered": 1, 00:09:13.220 "num_base_bdevs_operational": 3, 00:09:13.220 "base_bdevs_list": [ 00:09:13.220 { 00:09:13.220 "name": "BaseBdev1", 00:09:13.220 "uuid": "e30428df-4158-438a-bb54-17aee3d5ad42", 00:09:13.220 "is_configured": true, 00:09:13.220 "data_offset": 2048, 00:09:13.220 "data_size": 63488 00:09:13.220 }, 00:09:13.220 { 00:09:13.220 "name": "BaseBdev2", 00:09:13.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.220 "is_configured": false, 00:09:13.220 "data_offset": 0, 00:09:13.220 "data_size": 0 00:09:13.220 }, 00:09:13.220 { 00:09:13.220 "name": "BaseBdev3", 00:09:13.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.220 "is_configured": false, 00:09:13.220 "data_offset": 0, 00:09:13.220 "data_size": 0 00:09:13.220 } 00:09:13.220 ] 00:09:13.220 }' 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:13.220 06:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.789 06:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.049 [2024-08-13 06:04:15.724628] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.049 BaseBdev2 00:09:14.049 06:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:14.049 06:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:14.049 06:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:14.049 
06:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:14.049 06:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:14.049 06:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:14.049 06:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:14.308 06:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.308 [ 00:09:14.308 { 00:09:14.308 "name": "BaseBdev2", 00:09:14.308 "aliases": [ 00:09:14.308 "d692f1e2-27b0-4ef2-9edd-859c209042d0" 00:09:14.308 ], 00:09:14.308 "product_name": "Malloc disk", 00:09:14.308 "block_size": 512, 00:09:14.308 "num_blocks": 65536, 00:09:14.308 "uuid": "d692f1e2-27b0-4ef2-9edd-859c209042d0", 00:09:14.308 "assigned_rate_limits": { 00:09:14.308 "rw_ios_per_sec": 0, 00:09:14.308 "rw_mbytes_per_sec": 0, 00:09:14.308 "r_mbytes_per_sec": 0, 00:09:14.308 "w_mbytes_per_sec": 0 00:09:14.308 }, 00:09:14.308 "claimed": true, 00:09:14.308 "claim_type": "exclusive_write", 00:09:14.308 "zoned": false, 00:09:14.308 "supported_io_types": { 00:09:14.308 "read": true, 00:09:14.308 "write": true, 00:09:14.308 "unmap": true, 00:09:14.308 "flush": true, 00:09:14.308 "reset": true, 00:09:14.308 "nvme_admin": false, 00:09:14.308 "nvme_io": false, 00:09:14.308 "nvme_io_md": false, 00:09:14.308 "write_zeroes": true, 00:09:14.308 "zcopy": true, 00:09:14.308 "get_zone_info": false, 00:09:14.308 "zone_management": false, 00:09:14.308 "zone_append": false, 00:09:14.308 "compare": false, 00:09:14.308 "compare_and_write": false, 00:09:14.308 "abort": true, 00:09:14.308 "seek_hole": false, 00:09:14.308 "seek_data": false, 00:09:14.308 "copy": true, 00:09:14.308 "nvme_iov_md": false 00:09:14.308 }, 00:09:14.308 "memory_domains": [ 00:09:14.308 { 00:09:14.308 "dma_device_id": "system", 00:09:14.308 "dma_device_type": 1 00:09:14.308 }, 00:09:14.308 { 00:09:14.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.308 "dma_device_type": 2 00:09:14.308 } 00:09:14.308 ], 00:09:14.308 "driver_specific": {} 00:09:14.308 } 00:09:14.308 ] 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
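Between creating each malloc bdev and re-checking the array, the trace runs the waitforbdev helper, which issues bdev_wait_for_examine and then bdev_get_bdevs -b <name> -t 2000 so the new base bdev is registered (and claimed by the raid) before the state is verified. A simplified stand-alone equivalent of what the trace shows, with the 2000 ms timeout and socket path taken from the log (the full helper in autotest_common.sh does more bookkeeping), might be:
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
waitforbdev() {
    local bdev_name=$1
    $rpc bdev_wait_for_examine
    # -t makes bdev_get_bdevs wait up to 2000 ms for the bdev to appear before failing.
    $rpc bdev_get_bdevs -b "$bdev_name" -t 2000 > /dev/null
}
waitforbdev BaseBdev2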
00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:14.567 "name": "Existed_Raid", 00:09:14.567 "uuid": "ead3dd24-9670-4015-9695-7e1d3d8e00eb", 00:09:14.567 "strip_size_kb": 64, 00:09:14.567 "state": "configuring", 00:09:14.567 "raid_level": "raid0", 00:09:14.567 "superblock": true, 00:09:14.567 "num_base_bdevs": 3, 00:09:14.567 "num_base_bdevs_discovered": 2, 00:09:14.567 "num_base_bdevs_operational": 3, 00:09:14.567 "base_bdevs_list": [ 00:09:14.567 { 00:09:14.567 "name": "BaseBdev1", 00:09:14.567 "uuid": "e30428df-4158-438a-bb54-17aee3d5ad42", 00:09:14.567 "is_configured": true, 00:09:14.567 "data_offset": 2048, 00:09:14.567 "data_size": 63488 00:09:14.567 }, 00:09:14.567 { 00:09:14.567 "name": "BaseBdev2", 00:09:14.567 "uuid": "d692f1e2-27b0-4ef2-9edd-859c209042d0", 00:09:14.567 "is_configured": true, 00:09:14.567 "data_offset": 2048, 00:09:14.567 "data_size": 63488 00:09:14.567 }, 00:09:14.567 { 00:09:14.567 "name": "BaseBdev3", 00:09:14.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.567 "is_configured": false, 00:09:14.567 "data_offset": 0, 00:09:14.567 "data_size": 0 00:09:14.567 } 00:09:14.567 ] 00:09:14.567 }' 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:14.567 06:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.148 06:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:15.408 [2024-08-13 06:04:17.049501] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.408 [2024-08-13 06:04:17.049684] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:15.408 [2024-08-13 06:04:17.049709] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.408 [2024-08-13 06:04:17.050005] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:15.408 [2024-08-13 06:04:17.050149] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:15.408 [2024-08-13 06:04:17.050177] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:15.408 [2024-08-13 06:04:17.050295] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.408 BaseBdev3 00:09:15.408 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:15.408 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:09:15.408 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:09:15.408 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:15.408 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:15.408 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:15.408 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:15.670 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.930 [ 00:09:15.930 { 00:09:15.930 "name": "BaseBdev3", 00:09:15.930 "aliases": [ 00:09:15.930 "9326a92c-bab0-47bf-9397-9de2d5c3784a" 00:09:15.930 ], 00:09:15.930 "product_name": "Malloc disk", 00:09:15.930 "block_size": 512, 00:09:15.930 "num_blocks": 65536, 00:09:15.930 "uuid": "9326a92c-bab0-47bf-9397-9de2d5c3784a", 00:09:15.930 "assigned_rate_limits": { 00:09:15.930 "rw_ios_per_sec": 0, 00:09:15.930 "rw_mbytes_per_sec": 0, 00:09:15.930 "r_mbytes_per_sec": 0, 00:09:15.930 "w_mbytes_per_sec": 0 00:09:15.930 }, 00:09:15.930 "claimed": true, 00:09:15.930 "claim_type": "exclusive_write", 00:09:15.930 "zoned": false, 00:09:15.930 "supported_io_types": { 00:09:15.930 "read": true, 00:09:15.930 "write": true, 00:09:15.930 "unmap": true, 00:09:15.930 "flush": true, 00:09:15.930 "reset": true, 00:09:15.930 "nvme_admin": false, 00:09:15.930 "nvme_io": false, 00:09:15.930 "nvme_io_md": false, 00:09:15.930 "write_zeroes": true, 00:09:15.930 "zcopy": true, 00:09:15.930 "get_zone_info": false, 00:09:15.930 "zone_management": false, 00:09:15.930 "zone_append": false, 00:09:15.930 "compare": false, 00:09:15.930 "compare_and_write": false, 00:09:15.930 "abort": true, 00:09:15.930 "seek_hole": false, 00:09:15.930 "seek_data": false, 00:09:15.930 "copy": true, 00:09:15.930 "nvme_iov_md": false 00:09:15.930 }, 00:09:15.930 "memory_domains": [ 00:09:15.930 { 00:09:15.930 "dma_device_id": "system", 00:09:15.930 "dma_device_type": 1 00:09:15.930 }, 00:09:15.930 { 00:09:15.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.930 "dma_device_type": 2 00:09:15.930 } 00:09:15.930 ], 00:09:15.930 "driver_specific": {} 00:09:15.930 } 00:09:15.930 ] 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.930 "name": "Existed_Raid", 00:09:15.930 "uuid": "ead3dd24-9670-4015-9695-7e1d3d8e00eb", 00:09:15.930 "strip_size_kb": 64, 00:09:15.930 "state": "online", 00:09:15.930 "raid_level": "raid0", 00:09:15.930 "superblock": true, 00:09:15.930 "num_base_bdevs": 3, 00:09:15.930 "num_base_bdevs_discovered": 3, 00:09:15.930 "num_base_bdevs_operational": 3, 00:09:15.930 "base_bdevs_list": [ 00:09:15.930 { 00:09:15.930 "name": "BaseBdev1", 00:09:15.930 "uuid": "e30428df-4158-438a-bb54-17aee3d5ad42", 00:09:15.930 "is_configured": true, 00:09:15.930 "data_offset": 2048, 00:09:15.930 "data_size": 63488 00:09:15.930 }, 00:09:15.930 { 00:09:15.930 "name": "BaseBdev2", 00:09:15.930 "uuid": "d692f1e2-27b0-4ef2-9edd-859c209042d0", 00:09:15.930 "is_configured": true, 00:09:15.930 "data_offset": 2048, 00:09:15.930 "data_size": 63488 00:09:15.930 }, 00:09:15.930 { 00:09:15.930 "name": "BaseBdev3", 00:09:15.930 "uuid": "9326a92c-bab0-47bf-9397-9de2d5c3784a", 00:09:15.930 "is_configured": true, 00:09:15.930 "data_offset": 2048, 00:09:15.930 "data_size": 63488 00:09:15.930 } 00:09:15.930 ] 00:09:15.930 }' 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.930 06:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:16.498 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:16.757 [2024-08-13 06:04:18.415554] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.757 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:16.757 "name": "Existed_Raid", 00:09:16.757 "aliases": [ 00:09:16.757 "ead3dd24-9670-4015-9695-7e1d3d8e00eb" 00:09:16.757 ], 00:09:16.757 "product_name": "Raid Volume", 00:09:16.757 "block_size": 512, 00:09:16.757 "num_blocks": 190464, 
00:09:16.757 "uuid": "ead3dd24-9670-4015-9695-7e1d3d8e00eb", 00:09:16.757 "assigned_rate_limits": { 00:09:16.757 "rw_ios_per_sec": 0, 00:09:16.757 "rw_mbytes_per_sec": 0, 00:09:16.757 "r_mbytes_per_sec": 0, 00:09:16.757 "w_mbytes_per_sec": 0 00:09:16.757 }, 00:09:16.757 "claimed": false, 00:09:16.757 "zoned": false, 00:09:16.757 "supported_io_types": { 00:09:16.757 "read": true, 00:09:16.757 "write": true, 00:09:16.758 "unmap": true, 00:09:16.758 "flush": true, 00:09:16.758 "reset": true, 00:09:16.758 "nvme_admin": false, 00:09:16.758 "nvme_io": false, 00:09:16.758 "nvme_io_md": false, 00:09:16.758 "write_zeroes": true, 00:09:16.758 "zcopy": false, 00:09:16.758 "get_zone_info": false, 00:09:16.758 "zone_management": false, 00:09:16.758 "zone_append": false, 00:09:16.758 "compare": false, 00:09:16.758 "compare_and_write": false, 00:09:16.758 "abort": false, 00:09:16.758 "seek_hole": false, 00:09:16.758 "seek_data": false, 00:09:16.758 "copy": false, 00:09:16.758 "nvme_iov_md": false 00:09:16.758 }, 00:09:16.758 "memory_domains": [ 00:09:16.758 { 00:09:16.758 "dma_device_id": "system", 00:09:16.758 "dma_device_type": 1 00:09:16.758 }, 00:09:16.758 { 00:09:16.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.758 "dma_device_type": 2 00:09:16.758 }, 00:09:16.758 { 00:09:16.758 "dma_device_id": "system", 00:09:16.758 "dma_device_type": 1 00:09:16.758 }, 00:09:16.758 { 00:09:16.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.758 "dma_device_type": 2 00:09:16.758 }, 00:09:16.758 { 00:09:16.758 "dma_device_id": "system", 00:09:16.758 "dma_device_type": 1 00:09:16.758 }, 00:09:16.758 { 00:09:16.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.758 "dma_device_type": 2 00:09:16.758 } 00:09:16.758 ], 00:09:16.758 "driver_specific": { 00:09:16.758 "raid": { 00:09:16.758 "uuid": "ead3dd24-9670-4015-9695-7e1d3d8e00eb", 00:09:16.758 "strip_size_kb": 64, 00:09:16.758 "state": "online", 00:09:16.758 "raid_level": "raid0", 00:09:16.758 "superblock": true, 00:09:16.758 "num_base_bdevs": 3, 00:09:16.758 "num_base_bdevs_discovered": 3, 00:09:16.758 "num_base_bdevs_operational": 3, 00:09:16.758 "base_bdevs_list": [ 00:09:16.758 { 00:09:16.758 "name": "BaseBdev1", 00:09:16.758 "uuid": "e30428df-4158-438a-bb54-17aee3d5ad42", 00:09:16.758 "is_configured": true, 00:09:16.758 "data_offset": 2048, 00:09:16.758 "data_size": 63488 00:09:16.758 }, 00:09:16.758 { 00:09:16.758 "name": "BaseBdev2", 00:09:16.758 "uuid": "d692f1e2-27b0-4ef2-9edd-859c209042d0", 00:09:16.758 "is_configured": true, 00:09:16.758 "data_offset": 2048, 00:09:16.758 "data_size": 63488 00:09:16.758 }, 00:09:16.758 { 00:09:16.758 "name": "BaseBdev3", 00:09:16.758 "uuid": "9326a92c-bab0-47bf-9397-9de2d5c3784a", 00:09:16.758 "is_configured": true, 00:09:16.758 "data_offset": 2048, 00:09:16.758 "data_size": 63488 00:09:16.758 } 00:09:16.758 ] 00:09:16.758 } 00:09:16.758 } 00:09:16.758 }' 00:09:16.758 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.758 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:16.758 BaseBdev2 00:09:16.758 BaseBdev3' 00:09:16.758 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:16.758 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:16.758 
06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:17.017 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:17.017 "name": "BaseBdev1", 00:09:17.017 "aliases": [ 00:09:17.017 "e30428df-4158-438a-bb54-17aee3d5ad42" 00:09:17.017 ], 00:09:17.017 "product_name": "Malloc disk", 00:09:17.017 "block_size": 512, 00:09:17.017 "num_blocks": 65536, 00:09:17.017 "uuid": "e30428df-4158-438a-bb54-17aee3d5ad42", 00:09:17.017 "assigned_rate_limits": { 00:09:17.017 "rw_ios_per_sec": 0, 00:09:17.017 "rw_mbytes_per_sec": 0, 00:09:17.017 "r_mbytes_per_sec": 0, 00:09:17.017 "w_mbytes_per_sec": 0 00:09:17.017 }, 00:09:17.017 "claimed": true, 00:09:17.017 "claim_type": "exclusive_write", 00:09:17.017 "zoned": false, 00:09:17.017 "supported_io_types": { 00:09:17.017 "read": true, 00:09:17.017 "write": true, 00:09:17.017 "unmap": true, 00:09:17.017 "flush": true, 00:09:17.017 "reset": true, 00:09:17.017 "nvme_admin": false, 00:09:17.017 "nvme_io": false, 00:09:17.017 "nvme_io_md": false, 00:09:17.017 "write_zeroes": true, 00:09:17.017 "zcopy": true, 00:09:17.017 "get_zone_info": false, 00:09:17.017 "zone_management": false, 00:09:17.017 "zone_append": false, 00:09:17.017 "compare": false, 00:09:17.017 "compare_and_write": false, 00:09:17.017 "abort": true, 00:09:17.017 "seek_hole": false, 00:09:17.017 "seek_data": false, 00:09:17.017 "copy": true, 00:09:17.017 "nvme_iov_md": false 00:09:17.017 }, 00:09:17.017 "memory_domains": [ 00:09:17.017 { 00:09:17.017 "dma_device_id": "system", 00:09:17.017 "dma_device_type": 1 00:09:17.017 }, 00:09:17.017 { 00:09:17.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.017 "dma_device_type": 2 00:09:17.017 } 00:09:17.017 ], 00:09:17.017 "driver_specific": {} 00:09:17.017 }' 00:09:17.017 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:17.017 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:17.017 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:17.017 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:17.275 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:17.275 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:17.275 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:17.275 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:17.275 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:17.275 06:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:17.275 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:17.534 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:17.534 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:17.534 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:17.534 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:17.534 06:04:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:17.534 "name": "BaseBdev2", 00:09:17.534 "aliases": [ 00:09:17.534 "d692f1e2-27b0-4ef2-9edd-859c209042d0" 00:09:17.534 ], 00:09:17.534 "product_name": "Malloc disk", 00:09:17.534 "block_size": 512, 00:09:17.534 "num_blocks": 65536, 00:09:17.534 "uuid": "d692f1e2-27b0-4ef2-9edd-859c209042d0", 00:09:17.534 "assigned_rate_limits": { 00:09:17.534 "rw_ios_per_sec": 0, 00:09:17.534 "rw_mbytes_per_sec": 0, 00:09:17.534 "r_mbytes_per_sec": 0, 00:09:17.534 "w_mbytes_per_sec": 0 00:09:17.534 }, 00:09:17.534 "claimed": true, 00:09:17.534 "claim_type": "exclusive_write", 00:09:17.534 "zoned": false, 00:09:17.534 "supported_io_types": { 00:09:17.534 "read": true, 00:09:17.534 "write": true, 00:09:17.534 "unmap": true, 00:09:17.534 "flush": true, 00:09:17.534 "reset": true, 00:09:17.534 "nvme_admin": false, 00:09:17.534 "nvme_io": false, 00:09:17.534 "nvme_io_md": false, 00:09:17.534 "write_zeroes": true, 00:09:17.534 "zcopy": true, 00:09:17.534 "get_zone_info": false, 00:09:17.534 "zone_management": false, 00:09:17.534 "zone_append": false, 00:09:17.534 "compare": false, 00:09:17.534 "compare_and_write": false, 00:09:17.534 "abort": true, 00:09:17.534 "seek_hole": false, 00:09:17.534 "seek_data": false, 00:09:17.534 "copy": true, 00:09:17.534 "nvme_iov_md": false 00:09:17.534 }, 00:09:17.534 "memory_domains": [ 00:09:17.534 { 00:09:17.534 "dma_device_id": "system", 00:09:17.534 "dma_device_type": 1 00:09:17.534 }, 00:09:17.534 { 00:09:17.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.534 "dma_device_type": 2 00:09:17.534 } 00:09:17.534 ], 00:09:17.534 "driver_specific": {} 00:09:17.534 }' 00:09:17.534 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:17.793 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.051 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.051 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:18.051 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:18.052 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:18.052 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:18.310 "name": "BaseBdev3", 00:09:18.310 "aliases": [ 00:09:18.310 
"9326a92c-bab0-47bf-9397-9de2d5c3784a" 00:09:18.310 ], 00:09:18.310 "product_name": "Malloc disk", 00:09:18.310 "block_size": 512, 00:09:18.310 "num_blocks": 65536, 00:09:18.310 "uuid": "9326a92c-bab0-47bf-9397-9de2d5c3784a", 00:09:18.310 "assigned_rate_limits": { 00:09:18.310 "rw_ios_per_sec": 0, 00:09:18.310 "rw_mbytes_per_sec": 0, 00:09:18.310 "r_mbytes_per_sec": 0, 00:09:18.310 "w_mbytes_per_sec": 0 00:09:18.310 }, 00:09:18.310 "claimed": true, 00:09:18.310 "claim_type": "exclusive_write", 00:09:18.310 "zoned": false, 00:09:18.310 "supported_io_types": { 00:09:18.310 "read": true, 00:09:18.310 "write": true, 00:09:18.310 "unmap": true, 00:09:18.310 "flush": true, 00:09:18.310 "reset": true, 00:09:18.310 "nvme_admin": false, 00:09:18.310 "nvme_io": false, 00:09:18.310 "nvme_io_md": false, 00:09:18.310 "write_zeroes": true, 00:09:18.310 "zcopy": true, 00:09:18.310 "get_zone_info": false, 00:09:18.310 "zone_management": false, 00:09:18.310 "zone_append": false, 00:09:18.310 "compare": false, 00:09:18.310 "compare_and_write": false, 00:09:18.310 "abort": true, 00:09:18.310 "seek_hole": false, 00:09:18.310 "seek_data": false, 00:09:18.310 "copy": true, 00:09:18.310 "nvme_iov_md": false 00:09:18.310 }, 00:09:18.310 "memory_domains": [ 00:09:18.310 { 00:09:18.310 "dma_device_id": "system", 00:09:18.310 "dma_device_type": 1 00:09:18.310 }, 00:09:18.310 { 00:09:18.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.310 "dma_device_type": 2 00:09:18.310 } 00:09:18.310 ], 00:09:18.310 "driver_specific": {} 00:09:18.310 }' 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:18.310 06:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.310 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.310 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:18.310 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.569 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.569 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:18.569 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:18.827 [2024-08-13 06:04:20.364024] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:18.828 [2024-08-13 06:04:20.364067] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.828 [2024-08-13 06:04:20.364125] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy raid0 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:18.828 "name": "Existed_Raid", 00:09:18.828 "uuid": "ead3dd24-9670-4015-9695-7e1d3d8e00eb", 00:09:18.828 "strip_size_kb": 64, 00:09:18.828 "state": "offline", 00:09:18.828 "raid_level": "raid0", 00:09:18.828 "superblock": true, 00:09:18.828 "num_base_bdevs": 3, 00:09:18.828 "num_base_bdevs_discovered": 2, 00:09:18.828 "num_base_bdevs_operational": 2, 00:09:18.828 "base_bdevs_list": [ 00:09:18.828 { 00:09:18.828 "name": null, 00:09:18.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.828 "is_configured": false, 00:09:18.828 "data_offset": 2048, 00:09:18.828 "data_size": 63488 00:09:18.828 }, 00:09:18.828 { 00:09:18.828 "name": "BaseBdev2", 00:09:18.828 "uuid": "d692f1e2-27b0-4ef2-9edd-859c209042d0", 00:09:18.828 "is_configured": true, 00:09:18.828 "data_offset": 2048, 00:09:18.828 "data_size": 63488 00:09:18.828 }, 00:09:18.828 { 00:09:18.828 "name": "BaseBdev3", 00:09:18.828 "uuid": "9326a92c-bab0-47bf-9397-9de2d5c3784a", 00:09:18.828 "is_configured": true, 00:09:18.828 "data_offset": 2048, 00:09:18.828 "data_size": 63488 00:09:18.828 } 00:09:18.828 ] 00:09:18.828 }' 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:18.828 06:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:19.394 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:19.394 06:04:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.394 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:19.652 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:19.652 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:19.652 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:19.910 [2024-08-13 06:04:21.553476] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:19.910 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:19.910 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:19.910 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:19.910 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.169 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:20.169 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.169 06:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:20.428 [2024-08-13 06:04:21.980015] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.428 [2024-08-13 06:04:21.980089] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:20.428 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:20.428 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:20.428 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.428 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.696 BaseBdev2 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:20.696 06:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:20.971 06:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.230 [ 00:09:21.230 { 00:09:21.230 "name": "BaseBdev2", 00:09:21.230 "aliases": [ 00:09:21.230 "53a6b781-5d49-4537-a156-4e1ec787efd6" 00:09:21.230 ], 00:09:21.230 "product_name": "Malloc disk", 00:09:21.230 "block_size": 512, 00:09:21.230 "num_blocks": 65536, 00:09:21.230 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:21.230 "assigned_rate_limits": { 00:09:21.230 "rw_ios_per_sec": 0, 00:09:21.230 "rw_mbytes_per_sec": 0, 00:09:21.230 "r_mbytes_per_sec": 0, 00:09:21.230 "w_mbytes_per_sec": 0 00:09:21.230 }, 00:09:21.230 "claimed": false, 00:09:21.230 "zoned": false, 00:09:21.230 "supported_io_types": { 00:09:21.230 "read": true, 00:09:21.230 "write": true, 00:09:21.230 "unmap": true, 00:09:21.230 "flush": true, 00:09:21.230 "reset": true, 00:09:21.230 "nvme_admin": false, 00:09:21.230 "nvme_io": false, 00:09:21.230 "nvme_io_md": false, 00:09:21.230 "write_zeroes": true, 00:09:21.230 "zcopy": true, 00:09:21.230 "get_zone_info": false, 00:09:21.230 "zone_management": false, 00:09:21.230 "zone_append": false, 00:09:21.230 "compare": false, 00:09:21.230 "compare_and_write": false, 00:09:21.230 "abort": true, 00:09:21.230 "seek_hole": false, 00:09:21.230 "seek_data": false, 00:09:21.230 "copy": true, 00:09:21.230 "nvme_iov_md": false 00:09:21.230 }, 00:09:21.230 "memory_domains": [ 00:09:21.230 { 00:09:21.230 "dma_device_id": "system", 00:09:21.230 "dma_device_type": 1 00:09:21.230 }, 00:09:21.230 { 00:09:21.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.230 "dma_device_type": 2 00:09:21.230 } 00:09:21.230 ], 00:09:21.230 "driver_specific": {} 00:09:21.230 } 00:09:21.230 ] 00:09:21.230 06:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:21.230 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:21.230 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:21.230 06:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.490 BaseBdev3 00:09:21.490 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:21.490 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:09:21.490 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:21.490 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:21.490 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:21.490 06:04:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:21.490 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:21.750 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.750 [ 00:09:21.750 { 00:09:21.750 "name": "BaseBdev3", 00:09:21.750 "aliases": [ 00:09:21.750 "f173d51f-d215-4fb7-8458-3eb75ecb9bbc" 00:09:21.750 ], 00:09:21.750 "product_name": "Malloc disk", 00:09:21.750 "block_size": 512, 00:09:21.750 "num_blocks": 65536, 00:09:21.750 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:21.750 "assigned_rate_limits": { 00:09:21.750 "rw_ios_per_sec": 0, 00:09:21.750 "rw_mbytes_per_sec": 0, 00:09:21.750 "r_mbytes_per_sec": 0, 00:09:21.750 "w_mbytes_per_sec": 0 00:09:21.750 }, 00:09:21.750 "claimed": false, 00:09:21.750 "zoned": false, 00:09:21.750 "supported_io_types": { 00:09:21.750 "read": true, 00:09:21.750 "write": true, 00:09:21.750 "unmap": true, 00:09:21.750 "flush": true, 00:09:21.750 "reset": true, 00:09:21.750 "nvme_admin": false, 00:09:21.750 "nvme_io": false, 00:09:21.750 "nvme_io_md": false, 00:09:21.750 "write_zeroes": true, 00:09:21.750 "zcopy": true, 00:09:21.750 "get_zone_info": false, 00:09:21.750 "zone_management": false, 00:09:21.750 "zone_append": false, 00:09:21.750 "compare": false, 00:09:21.750 "compare_and_write": false, 00:09:21.750 "abort": true, 00:09:21.750 "seek_hole": false, 00:09:21.750 "seek_data": false, 00:09:21.750 "copy": true, 00:09:21.750 "nvme_iov_md": false 00:09:21.750 }, 00:09:21.750 "memory_domains": [ 00:09:21.750 { 00:09:21.750 "dma_device_id": "system", 00:09:21.750 "dma_device_type": 1 00:09:21.750 }, 00:09:21.750 { 00:09:21.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.750 "dma_device_type": 2 00:09:21.750 } 00:09:21.750 ], 00:09:21.750 "driver_specific": {} 00:09:21.750 } 00:09:21.750 ] 00:09:21.750 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:21.750 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:21.750 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:21.750 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:22.009 [2024-08-13 06:04:23.634410] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.009 [2024-08-13 06:04:23.634467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.009 [2024-08-13 06:04:23.634490] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.009 [2024-08-13 06:04:23.636317] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.009 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.269 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:22.269 "name": "Existed_Raid", 00:09:22.269 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:22.269 "strip_size_kb": 64, 00:09:22.269 "state": "configuring", 00:09:22.269 "raid_level": "raid0", 00:09:22.269 "superblock": true, 00:09:22.269 "num_base_bdevs": 3, 00:09:22.269 "num_base_bdevs_discovered": 2, 00:09:22.269 "num_base_bdevs_operational": 3, 00:09:22.269 "base_bdevs_list": [ 00:09:22.269 { 00:09:22.269 "name": "BaseBdev1", 00:09:22.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.269 "is_configured": false, 00:09:22.269 "data_offset": 0, 00:09:22.269 "data_size": 0 00:09:22.269 }, 00:09:22.269 { 00:09:22.269 "name": "BaseBdev2", 00:09:22.269 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:22.269 "is_configured": true, 00:09:22.269 "data_offset": 2048, 00:09:22.269 "data_size": 63488 00:09:22.269 }, 00:09:22.269 { 00:09:22.269 "name": "BaseBdev3", 00:09:22.269 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:22.269 "is_configured": true, 00:09:22.269 "data_offset": 2048, 00:09:22.269 "data_size": 63488 00:09:22.269 } 00:09:22.269 ] 00:09:22.269 }' 00:09:22.269 06:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:22.269 06:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:22.839 [2024-08-13 06:04:24.572785] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.839 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.098 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:23.098 "name": "Existed_Raid", 00:09:23.098 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:23.098 "strip_size_kb": 64, 00:09:23.098 "state": "configuring", 00:09:23.099 "raid_level": "raid0", 00:09:23.099 "superblock": true, 00:09:23.099 "num_base_bdevs": 3, 00:09:23.099 "num_base_bdevs_discovered": 1, 00:09:23.099 "num_base_bdevs_operational": 3, 00:09:23.099 "base_bdevs_list": [ 00:09:23.099 { 00:09:23.099 "name": "BaseBdev1", 00:09:23.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.099 "is_configured": false, 00:09:23.099 "data_offset": 0, 00:09:23.099 "data_size": 0 00:09:23.099 }, 00:09:23.099 { 00:09:23.099 "name": null, 00:09:23.099 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:23.099 "is_configured": false, 00:09:23.099 "data_offset": 2048, 00:09:23.099 "data_size": 63488 00:09:23.099 }, 00:09:23.099 { 00:09:23.099 "name": "BaseBdev3", 00:09:23.099 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:23.099 "is_configured": true, 00:09:23.099 "data_offset": 2048, 00:09:23.099 "data_size": 63488 00:09:23.099 } 00:09:23.099 ] 00:09:23.099 }' 00:09:23.099 06:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:23.099 06:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.668 06:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.668 06:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.927 06:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:23.927 06:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.186 [2024-08-13 06:04:25.769705] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.186 BaseBdev1 00:09:24.186 06:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:24.186 06:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:24.186 06:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:24.186 06:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:24.186 06:04:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:24.186 06:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:24.186 06:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:24.446 06:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.446 [ 00:09:24.446 { 00:09:24.446 "name": "BaseBdev1", 00:09:24.446 "aliases": [ 00:09:24.446 "5d1b9b52-c594-4838-a629-30b6f9c23def" 00:09:24.446 ], 00:09:24.446 "product_name": "Malloc disk", 00:09:24.446 "block_size": 512, 00:09:24.446 "num_blocks": 65536, 00:09:24.446 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:24.446 "assigned_rate_limits": { 00:09:24.446 "rw_ios_per_sec": 0, 00:09:24.446 "rw_mbytes_per_sec": 0, 00:09:24.446 "r_mbytes_per_sec": 0, 00:09:24.446 "w_mbytes_per_sec": 0 00:09:24.446 }, 00:09:24.446 "claimed": true, 00:09:24.446 "claim_type": "exclusive_write", 00:09:24.446 "zoned": false, 00:09:24.446 "supported_io_types": { 00:09:24.446 "read": true, 00:09:24.446 "write": true, 00:09:24.446 "unmap": true, 00:09:24.446 "flush": true, 00:09:24.446 "reset": true, 00:09:24.446 "nvme_admin": false, 00:09:24.446 "nvme_io": false, 00:09:24.446 "nvme_io_md": false, 00:09:24.446 "write_zeroes": true, 00:09:24.446 "zcopy": true, 00:09:24.446 "get_zone_info": false, 00:09:24.446 "zone_management": false, 00:09:24.446 "zone_append": false, 00:09:24.446 "compare": false, 00:09:24.446 "compare_and_write": false, 00:09:24.446 "abort": true, 00:09:24.446 "seek_hole": false, 00:09:24.446 "seek_data": false, 00:09:24.446 "copy": true, 00:09:24.446 "nvme_iov_md": false 00:09:24.446 }, 00:09:24.446 "memory_domains": [ 00:09:24.446 { 00:09:24.446 "dma_device_id": "system", 00:09:24.447 "dma_device_type": 1 00:09:24.447 }, 00:09:24.447 { 00:09:24.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.447 "dma_device_type": 2 00:09:24.447 } 00:09:24.447 ], 00:09:24.447 "driver_specific": {} 00:09:24.447 } 00:09:24.447 ] 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:24.447 06:04:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.447 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.706 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:24.706 "name": "Existed_Raid", 00:09:24.706 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:24.706 "strip_size_kb": 64, 00:09:24.706 "state": "configuring", 00:09:24.706 "raid_level": "raid0", 00:09:24.706 "superblock": true, 00:09:24.706 "num_base_bdevs": 3, 00:09:24.706 "num_base_bdevs_discovered": 2, 00:09:24.706 "num_base_bdevs_operational": 3, 00:09:24.706 "base_bdevs_list": [ 00:09:24.706 { 00:09:24.706 "name": "BaseBdev1", 00:09:24.706 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:24.706 "is_configured": true, 00:09:24.706 "data_offset": 2048, 00:09:24.706 "data_size": 63488 00:09:24.706 }, 00:09:24.706 { 00:09:24.706 "name": null, 00:09:24.706 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:24.706 "is_configured": false, 00:09:24.706 "data_offset": 2048, 00:09:24.706 "data_size": 63488 00:09:24.706 }, 00:09:24.706 { 00:09:24.706 "name": "BaseBdev3", 00:09:24.706 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:24.706 "is_configured": true, 00:09:24.706 "data_offset": 2048, 00:09:24.706 "data_size": 63488 00:09:24.706 } 00:09:24.706 ] 00:09:24.706 }' 00:09:24.706 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:24.706 06:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.275 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.276 06:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:25.535 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:25.535 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:25.795 [2024-08-13 06:04:27.367078] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
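Every state check in this trace runs the same query: bdev_raid_get_bdevs over the test's RPC socket, filtered down to the bdev under test with jq. A minimal stand-alone sketch of that query; the rpc.py path, socket, and bdev name are all taken from this trace, nothing else is assumed:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'

The JSON it returns (state, raid_level, strip_size_kb, num_base_bdevs_discovered, base_bdevs_list) is what the surrounding verify_raid_bdev_state calls compare against the expected "configuring" values.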
00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.795 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.055 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:26.055 "name": "Existed_Raid", 00:09:26.055 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:26.055 "strip_size_kb": 64, 00:09:26.055 "state": "configuring", 00:09:26.055 "raid_level": "raid0", 00:09:26.055 "superblock": true, 00:09:26.055 "num_base_bdevs": 3, 00:09:26.055 "num_base_bdevs_discovered": 1, 00:09:26.055 "num_base_bdevs_operational": 3, 00:09:26.055 "base_bdevs_list": [ 00:09:26.055 { 00:09:26.055 "name": "BaseBdev1", 00:09:26.055 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:26.055 "is_configured": true, 00:09:26.055 "data_offset": 2048, 00:09:26.055 "data_size": 63488 00:09:26.055 }, 00:09:26.055 { 00:09:26.055 "name": null, 00:09:26.055 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:26.055 "is_configured": false, 00:09:26.055 "data_offset": 2048, 00:09:26.055 "data_size": 63488 00:09:26.055 }, 00:09:26.055 { 00:09:26.055 "name": null, 00:09:26.055 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:26.055 "is_configured": false, 00:09:26.055 "data_offset": 2048, 00:09:26.055 "data_size": 63488 00:09:26.055 } 00:09:26.055 ] 00:09:26.055 }' 00:09:26.055 06:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:26.055 06:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.315 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.315 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:26.574 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:26.574 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:26.834 [2024-08-13 06:04:28.443602] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.834 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.094 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:27.094 "name": "Existed_Raid", 00:09:27.094 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:27.094 "strip_size_kb": 64, 00:09:27.094 "state": "configuring", 00:09:27.094 "raid_level": "raid0", 00:09:27.094 "superblock": true, 00:09:27.094 "num_base_bdevs": 3, 00:09:27.094 "num_base_bdevs_discovered": 2, 00:09:27.094 "num_base_bdevs_operational": 3, 00:09:27.094 "base_bdevs_list": [ 00:09:27.094 { 00:09:27.094 "name": "BaseBdev1", 00:09:27.094 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:27.094 "is_configured": true, 00:09:27.094 "data_offset": 2048, 00:09:27.094 "data_size": 63488 00:09:27.094 }, 00:09:27.094 { 00:09:27.094 "name": null, 00:09:27.094 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:27.094 "is_configured": false, 00:09:27.094 "data_offset": 2048, 00:09:27.094 "data_size": 63488 00:09:27.094 }, 00:09:27.094 { 00:09:27.094 "name": "BaseBdev3", 00:09:27.094 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:27.094 "is_configured": true, 00:09:27.094 "data_offset": 2048, 00:09:27.094 "data_size": 63488 00:09:27.094 } 00:09:27.094 ] 00:09:27.094 }' 00:09:27.094 06:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:27.094 06:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.663 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.663 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.663 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:27.663 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:27.923 [2024-08-13 06:04:29.589713] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.923 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.183 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:28.183 "name": "Existed_Raid", 00:09:28.183 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:28.183 "strip_size_kb": 64, 00:09:28.183 "state": "configuring", 00:09:28.183 "raid_level": "raid0", 00:09:28.183 "superblock": true, 00:09:28.183 "num_base_bdevs": 3, 00:09:28.183 "num_base_bdevs_discovered": 1, 00:09:28.183 "num_base_bdevs_operational": 3, 00:09:28.183 "base_bdevs_list": [ 00:09:28.183 { 00:09:28.183 "name": null, 00:09:28.183 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:28.183 "is_configured": false, 00:09:28.183 "data_offset": 2048, 00:09:28.183 "data_size": 63488 00:09:28.183 }, 00:09:28.183 { 00:09:28.183 "name": null, 00:09:28.183 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:28.183 "is_configured": false, 00:09:28.183 "data_offset": 2048, 00:09:28.183 "data_size": 63488 00:09:28.183 }, 00:09:28.183 { 00:09:28.183 "name": "BaseBdev3", 00:09:28.183 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:28.183 "is_configured": true, 00:09:28.183 "data_offset": 2048, 00:09:28.183 "data_size": 63488 00:09:28.183 } 00:09:28.183 ] 00:09:28.183 }' 00:09:28.183 06:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:28.183 06:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.752 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.752 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:29.012 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:29.012 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:29.012 [2024-08-13 06:04:30.794290] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.272 06:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.272 06:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:29.272 "name": "Existed_Raid", 00:09:29.272 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:29.272 "strip_size_kb": 64, 00:09:29.272 "state": "configuring", 00:09:29.272 "raid_level": "raid0", 00:09:29.272 "superblock": true, 00:09:29.272 "num_base_bdevs": 3, 00:09:29.272 "num_base_bdevs_discovered": 2, 00:09:29.272 "num_base_bdevs_operational": 3, 00:09:29.272 "base_bdevs_list": [ 00:09:29.272 { 00:09:29.272 "name": null, 00:09:29.272 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:29.272 "is_configured": false, 00:09:29.272 "data_offset": 2048, 00:09:29.272 "data_size": 63488 00:09:29.272 }, 00:09:29.272 { 00:09:29.272 "name": "BaseBdev2", 00:09:29.272 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:29.272 "is_configured": true, 00:09:29.272 "data_offset": 2048, 00:09:29.272 "data_size": 63488 00:09:29.272 }, 00:09:29.272 { 00:09:29.272 "name": "BaseBdev3", 00:09:29.272 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:29.272 "is_configured": true, 00:09:29.272 "data_offset": 2048, 00:09:29.272 "data_size": 63488 00:09:29.272 } 00:09:29.272 ] 00:09:29.272 }' 00:09:29.272 06:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:29.272 06:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.215 06:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.215 06:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.215 06:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:30.215 06:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.215 06:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:30.475 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5d1b9b52-c594-4838-a629-30b6f9c23def 00:09:30.475 [2024-08-13 06:04:32.218903] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:30.475 [2024-08-13 06:04:32.219157] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:30.475 [2024-08-13 06:04:32.219213] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.475 [2024-08-13 06:04:32.219474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:30.475 [2024-08-13 06:04:32.219622] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:30.475 [2024-08-13 06:04:32.219669] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:30.475 [2024-08-13 06:04:32.219802] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.475 NewBaseBdev 00:09:30.475 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:30.476 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:09:30.476 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:30.476 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:30.476 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:30.476 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:30.476 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:30.735 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:31.062 [ 00:09:31.062 { 00:09:31.062 "name": "NewBaseBdev", 00:09:31.062 "aliases": [ 00:09:31.062 "5d1b9b52-c594-4838-a629-30b6f9c23def" 00:09:31.062 ], 00:09:31.062 "product_name": "Malloc disk", 00:09:31.062 "block_size": 512, 00:09:31.062 "num_blocks": 65536, 00:09:31.062 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:31.062 "assigned_rate_limits": { 00:09:31.062 "rw_ios_per_sec": 0, 00:09:31.062 "rw_mbytes_per_sec": 0, 00:09:31.062 "r_mbytes_per_sec": 0, 00:09:31.062 "w_mbytes_per_sec": 0 00:09:31.062 }, 00:09:31.062 "claimed": true, 00:09:31.062 "claim_type": "exclusive_write", 00:09:31.062 "zoned": false, 00:09:31.062 "supported_io_types": { 00:09:31.062 "read": true, 00:09:31.062 "write": true, 00:09:31.062 "unmap": true, 00:09:31.062 "flush": true, 00:09:31.062 "reset": true, 00:09:31.062 "nvme_admin": false, 00:09:31.062 "nvme_io": false, 00:09:31.062 "nvme_io_md": false, 00:09:31.062 "write_zeroes": true, 00:09:31.062 "zcopy": true, 00:09:31.062 "get_zone_info": false, 00:09:31.062 "zone_management": false, 00:09:31.062 "zone_append": false, 00:09:31.062 "compare": false, 00:09:31.062 "compare_and_write": false, 00:09:31.062 "abort": true, 00:09:31.062 "seek_hole": false, 00:09:31.062 "seek_data": false, 00:09:31.062 "copy": true, 00:09:31.062 "nvme_iov_md": false 00:09:31.062 }, 00:09:31.062 "memory_domains": [ 00:09:31.062 { 00:09:31.062 "dma_device_id": "system", 00:09:31.062 "dma_device_type": 1 00:09:31.062 }, 00:09:31.062 { 00:09:31.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.062 "dma_device_type": 2 00:09:31.062 } 00:09:31.062 ], 00:09:31.062 "driver_specific": {} 00:09:31.062 } 00:09:31.062 ] 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:31.062 "name": "Existed_Raid", 00:09:31.062 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:31.062 "strip_size_kb": 64, 00:09:31.062 "state": "online", 00:09:31.062 "raid_level": "raid0", 00:09:31.062 "superblock": true, 00:09:31.062 "num_base_bdevs": 3, 00:09:31.062 "num_base_bdevs_discovered": 3, 00:09:31.062 "num_base_bdevs_operational": 3, 00:09:31.062 "base_bdevs_list": [ 00:09:31.062 { 00:09:31.062 "name": "NewBaseBdev", 00:09:31.062 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:31.062 "is_configured": true, 00:09:31.062 "data_offset": 2048, 00:09:31.062 "data_size": 63488 00:09:31.062 }, 00:09:31.062 { 00:09:31.062 "name": "BaseBdev2", 00:09:31.062 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:31.062 "is_configured": true, 00:09:31.062 "data_offset": 2048, 00:09:31.062 "data_size": 63488 00:09:31.062 }, 00:09:31.062 { 00:09:31.062 "name": "BaseBdev3", 00:09:31.062 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:31.062 "is_configured": true, 00:09:31.062 "data_offset": 2048, 00:09:31.062 "data_size": 63488 00:09:31.062 } 00:09:31.062 ] 00:09:31.062 }' 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:31.062 06:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 
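The verify_raid_bdev_properties helper traced below fetches each configured base bdev with bdev_get_bdevs -b and compares block_size, md_size, md_interleave and dif_type field by field. A hedged sketch of one such per-bdev check, built only from commands visible in this trace (the exact helper internals may differ):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_get_bdevs -b NewBaseBdev | jq '.[]')
    [[ $(jq .block_size <<< "$info") == 512 ]]    # malloc base bdevs in this run use 512-byte blocks
    [[ $(jq .dif_type   <<< "$info") == null ]]   # no DIF configured on these bdevs

The same checks repeat for BaseBdev2 and BaseBdev3 in the output that follows.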
00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:31.634 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:31.894 [2024-08-13 06:04:33.541075] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.894 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:31.894 "name": "Existed_Raid", 00:09:31.894 "aliases": [ 00:09:31.894 "edce168a-40bd-409f-8825-2522a4940d63" 00:09:31.894 ], 00:09:31.894 "product_name": "Raid Volume", 00:09:31.894 "block_size": 512, 00:09:31.894 "num_blocks": 190464, 00:09:31.894 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:31.894 "assigned_rate_limits": { 00:09:31.894 "rw_ios_per_sec": 0, 00:09:31.894 "rw_mbytes_per_sec": 0, 00:09:31.894 "r_mbytes_per_sec": 0, 00:09:31.894 "w_mbytes_per_sec": 0 00:09:31.894 }, 00:09:31.894 "claimed": false, 00:09:31.894 "zoned": false, 00:09:31.894 "supported_io_types": { 00:09:31.894 "read": true, 00:09:31.894 "write": true, 00:09:31.894 "unmap": true, 00:09:31.894 "flush": true, 00:09:31.894 "reset": true, 00:09:31.894 "nvme_admin": false, 00:09:31.894 "nvme_io": false, 00:09:31.894 "nvme_io_md": false, 00:09:31.894 "write_zeroes": true, 00:09:31.894 "zcopy": false, 00:09:31.894 "get_zone_info": false, 00:09:31.894 "zone_management": false, 00:09:31.894 "zone_append": false, 00:09:31.894 "compare": false, 00:09:31.894 "compare_and_write": false, 00:09:31.894 "abort": false, 00:09:31.894 "seek_hole": false, 00:09:31.894 "seek_data": false, 00:09:31.894 "copy": false, 00:09:31.894 "nvme_iov_md": false 00:09:31.894 }, 00:09:31.894 "memory_domains": [ 00:09:31.894 { 00:09:31.894 "dma_device_id": "system", 00:09:31.894 "dma_device_type": 1 00:09:31.894 }, 00:09:31.894 { 00:09:31.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.894 "dma_device_type": 2 00:09:31.894 }, 00:09:31.894 { 00:09:31.894 "dma_device_id": "system", 00:09:31.894 "dma_device_type": 1 00:09:31.894 }, 00:09:31.894 { 00:09:31.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.894 "dma_device_type": 2 00:09:31.894 }, 00:09:31.894 { 00:09:31.894 "dma_device_id": "system", 00:09:31.894 "dma_device_type": 1 00:09:31.894 }, 00:09:31.894 { 00:09:31.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.894 "dma_device_type": 2 00:09:31.894 } 00:09:31.894 ], 00:09:31.894 "driver_specific": { 00:09:31.894 "raid": { 00:09:31.894 "uuid": "edce168a-40bd-409f-8825-2522a4940d63", 00:09:31.894 "strip_size_kb": 64, 00:09:31.894 "state": "online", 00:09:31.894 "raid_level": "raid0", 00:09:31.894 "superblock": true, 00:09:31.894 "num_base_bdevs": 3, 00:09:31.894 "num_base_bdevs_discovered": 3, 00:09:31.894 "num_base_bdevs_operational": 3, 00:09:31.894 "base_bdevs_list": [ 00:09:31.894 { 00:09:31.894 "name": "NewBaseBdev", 00:09:31.894 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:31.894 "is_configured": true, 00:09:31.894 "data_offset": 2048, 00:09:31.894 "data_size": 63488 00:09:31.894 }, 00:09:31.894 { 00:09:31.894 "name": "BaseBdev2", 00:09:31.894 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:31.894 "is_configured": true, 00:09:31.894 "data_offset": 2048, 00:09:31.894 "data_size": 63488 00:09:31.894 }, 00:09:31.894 { 00:09:31.894 "name": "BaseBdev3", 00:09:31.894 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:31.894 "is_configured": true, 00:09:31.894 "data_offset": 2048, 
00:09:31.894 "data_size": 63488 00:09:31.894 } 00:09:31.894 ] 00:09:31.894 } 00:09:31.894 } 00:09:31.894 }' 00:09:31.894 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.894 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:31.894 BaseBdev2 00:09:31.894 BaseBdev3' 00:09:31.894 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:31.894 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:31.895 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:32.154 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:32.155 "name": "NewBaseBdev", 00:09:32.155 "aliases": [ 00:09:32.155 "5d1b9b52-c594-4838-a629-30b6f9c23def" 00:09:32.155 ], 00:09:32.155 "product_name": "Malloc disk", 00:09:32.155 "block_size": 512, 00:09:32.155 "num_blocks": 65536, 00:09:32.155 "uuid": "5d1b9b52-c594-4838-a629-30b6f9c23def", 00:09:32.155 "assigned_rate_limits": { 00:09:32.155 "rw_ios_per_sec": 0, 00:09:32.155 "rw_mbytes_per_sec": 0, 00:09:32.155 "r_mbytes_per_sec": 0, 00:09:32.155 "w_mbytes_per_sec": 0 00:09:32.155 }, 00:09:32.155 "claimed": true, 00:09:32.155 "claim_type": "exclusive_write", 00:09:32.155 "zoned": false, 00:09:32.155 "supported_io_types": { 00:09:32.155 "read": true, 00:09:32.155 "write": true, 00:09:32.155 "unmap": true, 00:09:32.155 "flush": true, 00:09:32.155 "reset": true, 00:09:32.155 "nvme_admin": false, 00:09:32.155 "nvme_io": false, 00:09:32.155 "nvme_io_md": false, 00:09:32.155 "write_zeroes": true, 00:09:32.155 "zcopy": true, 00:09:32.155 "get_zone_info": false, 00:09:32.155 "zone_management": false, 00:09:32.155 "zone_append": false, 00:09:32.155 "compare": false, 00:09:32.155 "compare_and_write": false, 00:09:32.155 "abort": true, 00:09:32.155 "seek_hole": false, 00:09:32.155 "seek_data": false, 00:09:32.155 "copy": true, 00:09:32.155 "nvme_iov_md": false 00:09:32.155 }, 00:09:32.155 "memory_domains": [ 00:09:32.155 { 00:09:32.155 "dma_device_id": "system", 00:09:32.155 "dma_device_type": 1 00:09:32.155 }, 00:09:32.155 { 00:09:32.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.155 "dma_device_type": 2 00:09:32.155 } 00:09:32.155 ], 00:09:32.155 "driver_specific": {} 00:09:32.155 }' 00:09:32.155 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:32.155 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:32.155 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:32.155 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:32.155 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:32.414 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:32.414 06:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.414 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.414 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:32.414 
06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.414 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.414 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:32.414 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:32.414 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:32.414 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:32.673 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:32.673 "name": "BaseBdev2", 00:09:32.673 "aliases": [ 00:09:32.673 "53a6b781-5d49-4537-a156-4e1ec787efd6" 00:09:32.673 ], 00:09:32.673 "product_name": "Malloc disk", 00:09:32.673 "block_size": 512, 00:09:32.673 "num_blocks": 65536, 00:09:32.673 "uuid": "53a6b781-5d49-4537-a156-4e1ec787efd6", 00:09:32.673 "assigned_rate_limits": { 00:09:32.673 "rw_ios_per_sec": 0, 00:09:32.673 "rw_mbytes_per_sec": 0, 00:09:32.673 "r_mbytes_per_sec": 0, 00:09:32.673 "w_mbytes_per_sec": 0 00:09:32.673 }, 00:09:32.673 "claimed": true, 00:09:32.673 "claim_type": "exclusive_write", 00:09:32.673 "zoned": false, 00:09:32.673 "supported_io_types": { 00:09:32.673 "read": true, 00:09:32.673 "write": true, 00:09:32.673 "unmap": true, 00:09:32.673 "flush": true, 00:09:32.673 "reset": true, 00:09:32.673 "nvme_admin": false, 00:09:32.673 "nvme_io": false, 00:09:32.673 "nvme_io_md": false, 00:09:32.673 "write_zeroes": true, 00:09:32.673 "zcopy": true, 00:09:32.673 "get_zone_info": false, 00:09:32.673 "zone_management": false, 00:09:32.673 "zone_append": false, 00:09:32.673 "compare": false, 00:09:32.673 "compare_and_write": false, 00:09:32.673 "abort": true, 00:09:32.673 "seek_hole": false, 00:09:32.673 "seek_data": false, 00:09:32.673 "copy": true, 00:09:32.673 "nvme_iov_md": false 00:09:32.673 }, 00:09:32.673 "memory_domains": [ 00:09:32.673 { 00:09:32.673 "dma_device_id": "system", 00:09:32.673 "dma_device_type": 1 00:09:32.673 }, 00:09:32.673 { 00:09:32.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.673 "dma_device_type": 2 00:09:32.673 } 00:09:32.673 ], 00:09:32.673 "driver_specific": {} 00:09:32.673 }' 00:09:32.673 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:32.673 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:32.673 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:32.673 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:32.932 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:32.932 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:32.933 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:33.192 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:33.192 "name": "BaseBdev3", 00:09:33.192 "aliases": [ 00:09:33.192 "f173d51f-d215-4fb7-8458-3eb75ecb9bbc" 00:09:33.192 ], 00:09:33.192 "product_name": "Malloc disk", 00:09:33.192 "block_size": 512, 00:09:33.192 "num_blocks": 65536, 00:09:33.192 "uuid": "f173d51f-d215-4fb7-8458-3eb75ecb9bbc", 00:09:33.192 "assigned_rate_limits": { 00:09:33.192 "rw_ios_per_sec": 0, 00:09:33.192 "rw_mbytes_per_sec": 0, 00:09:33.192 "r_mbytes_per_sec": 0, 00:09:33.192 "w_mbytes_per_sec": 0 00:09:33.192 }, 00:09:33.192 "claimed": true, 00:09:33.192 "claim_type": "exclusive_write", 00:09:33.192 "zoned": false, 00:09:33.192 "supported_io_types": { 00:09:33.192 "read": true, 00:09:33.192 "write": true, 00:09:33.192 "unmap": true, 00:09:33.192 "flush": true, 00:09:33.192 "reset": true, 00:09:33.192 "nvme_admin": false, 00:09:33.192 "nvme_io": false, 00:09:33.192 "nvme_io_md": false, 00:09:33.192 "write_zeroes": true, 00:09:33.192 "zcopy": true, 00:09:33.192 "get_zone_info": false, 00:09:33.192 "zone_management": false, 00:09:33.192 "zone_append": false, 00:09:33.192 "compare": false, 00:09:33.192 "compare_and_write": false, 00:09:33.192 "abort": true, 00:09:33.192 "seek_hole": false, 00:09:33.192 "seek_data": false, 00:09:33.192 "copy": true, 00:09:33.192 "nvme_iov_md": false 00:09:33.192 }, 00:09:33.192 "memory_domains": [ 00:09:33.192 { 00:09:33.192 "dma_device_id": "system", 00:09:33.192 "dma_device_type": 1 00:09:33.192 }, 00:09:33.192 { 00:09:33.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.192 "dma_device_type": 2 00:09:33.192 } 00:09:33.192 ], 00:09:33.192 "driver_specific": {} 00:09:33.192 }' 00:09:33.192 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:33.192 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:33.452 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:33.452 06:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
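Teardown in the trace below is two RPC-driven steps: delete the RAID bdev, then stop the test application. A minimal sketch of the delete, reusing the socket from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid

After the delete the log records the raid bdev moving from online to offline and its base bdev count dropping to 0 before the process with pid 76078 is killed.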
00:09:33.452 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:33.711 [2024-08-13 06:04:35.381717] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.711 [2024-08-13 06:04:35.381844] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.711 [2024-08-13 06:04:35.381948] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.711 [2024-08-13 06:04:35.382022] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.711 [2024-08-13 06:04:35.382085] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 76078 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 76078 ']' 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 76078 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76078 00:09:33.711 killing process with pid 76078 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76078' 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 76078 00:09:33.711 [2024-08-13 06:04:35.441793] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.711 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 76078 00:09:33.711 [2024-08-13 06:04:35.472703] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.970 ************************************ 00:09:33.970 END TEST raid_state_function_test_sb 00:09:33.970 ************************************ 00:09:33.970 06:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:33.970 00:09:33.970 real 0m24.944s 00:09:33.970 user 0m46.390s 00:09:33.970 sys 0m3.809s 00:09:33.970 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.970 06:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.970 06:04:35 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:33.970 06:04:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:33.970 06:04:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.970 06:04:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.230 ************************************ 00:09:34.230 START TEST raid_superblock_test 00:09:34.230 ************************************ 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=76975 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 76975 /var/tmp/spdk-raid.sock 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 76975 ']' 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:34.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:34.230 06:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.230 [2024-08-13 06:04:35.860647] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
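raid_superblock_test, starting here, assembles its array one layer at a time: a malloc bdev per slot, a passthru bdev (pt1..pt3) on top of each, then bdev_raid_create with -s so the resulting array carries a superblock (the JSON later in this trace shows "superblock": true). A hedged sketch of that construction, using only commands and the fixed UUIDs that appear later in this trace; the real test drives them through its own arrays and loop variables:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b malloc$i
        $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s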
00:09:34.230 [2024-08-13 06:04:35.860906] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76975 ] 00:09:34.230 [2024-08-13 06:04:36.010394] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.489 [2024-08-13 06:04:36.059644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.489 [2024-08-13 06:04:36.102663] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.489 [2024-08-13 06:04:36.102781] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.056 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.057 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:35.316 malloc1 00:09:35.316 06:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.316 [2024-08-13 06:04:37.091016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.316 [2024-08-13 06:04:37.091191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.316 [2024-08-13 06:04:37.091234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:35.316 [2024-08-13 06:04:37.091273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.316 [2024-08-13 06:04:37.093551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.316 [2024-08-13 06:04:37.093637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.316 pt1 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:35.575 malloc2 00:09:35.575 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.834 [2024-08-13 06:04:37.515189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.834 [2024-08-13 06:04:37.515267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.834 [2024-08-13 06:04:37.515289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:35.834 [2024-08-13 06:04:37.515298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.834 [2024-08-13 06:04:37.517503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.834 [2024-08-13 06:04:37.517544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.834 pt2 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.834 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:36.097 malloc3 00:09:36.097 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:36.357 [2024-08-13 06:04:37.966759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:36.357 [2024-08-13 06:04:37.966937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.357 [2024-08-13 06:04:37.966983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:36.357 [2024-08-13 06:04:37.967013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.357 [2024-08-13 06:04:37.969195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.357 [2024-08-13 
06:04:37.969275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:36.357 pt3 00:09:36.357 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:09:36.357 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:09:36.357 06:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:36.616 [2024-08-13 06:04:38.174520] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:36.616 [2024-08-13 06:04:38.176634] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.616 [2024-08-13 06:04:38.176758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:36.616 [2024-08-13 06:04:38.176971] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:36.616 [2024-08-13 06:04:38.177039] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:36.616 [2024-08-13 06:04:38.177451] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:36.616 [2024-08-13 06:04:38.177652] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:36.616 [2024-08-13 06:04:38.177696] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:36.616 [2024-08-13 06:04:38.177895] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.616 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.875 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:36.875 "name": "raid_bdev1", 00:09:36.875 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:36.875 "strip_size_kb": 64, 00:09:36.875 "state": "online", 00:09:36.875 "raid_level": "raid0", 00:09:36.875 "superblock": true, 00:09:36.875 "num_base_bdevs": 3, 00:09:36.875 "num_base_bdevs_discovered": 3, 00:09:36.875 "num_base_bdevs_operational": 3, 00:09:36.875 
"base_bdevs_list": [ 00:09:36.875 { 00:09:36.875 "name": "pt1", 00:09:36.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.875 "is_configured": true, 00:09:36.875 "data_offset": 2048, 00:09:36.875 "data_size": 63488 00:09:36.875 }, 00:09:36.875 { 00:09:36.875 "name": "pt2", 00:09:36.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.876 "is_configured": true, 00:09:36.876 "data_offset": 2048, 00:09:36.876 "data_size": 63488 00:09:36.876 }, 00:09:36.876 { 00:09:36.876 "name": "pt3", 00:09:36.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.876 "is_configured": true, 00:09:36.876 "data_offset": 2048, 00:09:36.876 "data_size": 63488 00:09:36.876 } 00:09:36.876 ] 00:09:36.876 }' 00:09:36.876 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:36.876 06:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:37.444 06:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:37.444 [2024-08-13 06:04:39.129143] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.445 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:37.445 "name": "raid_bdev1", 00:09:37.445 "aliases": [ 00:09:37.445 "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b" 00:09:37.445 ], 00:09:37.445 "product_name": "Raid Volume", 00:09:37.445 "block_size": 512, 00:09:37.445 "num_blocks": 190464, 00:09:37.445 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:37.445 "assigned_rate_limits": { 00:09:37.445 "rw_ios_per_sec": 0, 00:09:37.445 "rw_mbytes_per_sec": 0, 00:09:37.445 "r_mbytes_per_sec": 0, 00:09:37.445 "w_mbytes_per_sec": 0 00:09:37.445 }, 00:09:37.445 "claimed": false, 00:09:37.445 "zoned": false, 00:09:37.445 "supported_io_types": { 00:09:37.445 "read": true, 00:09:37.445 "write": true, 00:09:37.445 "unmap": true, 00:09:37.445 "flush": true, 00:09:37.445 "reset": true, 00:09:37.445 "nvme_admin": false, 00:09:37.445 "nvme_io": false, 00:09:37.445 "nvme_io_md": false, 00:09:37.445 "write_zeroes": true, 00:09:37.445 "zcopy": false, 00:09:37.445 "get_zone_info": false, 00:09:37.445 "zone_management": false, 00:09:37.445 "zone_append": false, 00:09:37.445 "compare": false, 00:09:37.445 "compare_and_write": false, 00:09:37.445 "abort": false, 00:09:37.445 "seek_hole": false, 00:09:37.445 "seek_data": false, 00:09:37.445 "copy": false, 00:09:37.445 "nvme_iov_md": false 00:09:37.445 }, 00:09:37.445 "memory_domains": [ 00:09:37.445 { 00:09:37.445 "dma_device_id": "system", 00:09:37.445 "dma_device_type": 1 00:09:37.445 }, 00:09:37.445 { 00:09:37.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.445 "dma_device_type": 2 
00:09:37.445 }, 00:09:37.445 { 00:09:37.445 "dma_device_id": "system", 00:09:37.445 "dma_device_type": 1 00:09:37.445 }, 00:09:37.445 { 00:09:37.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.445 "dma_device_type": 2 00:09:37.445 }, 00:09:37.445 { 00:09:37.445 "dma_device_id": "system", 00:09:37.445 "dma_device_type": 1 00:09:37.445 }, 00:09:37.445 { 00:09:37.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.445 "dma_device_type": 2 00:09:37.445 } 00:09:37.445 ], 00:09:37.445 "driver_specific": { 00:09:37.445 "raid": { 00:09:37.445 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:37.445 "strip_size_kb": 64, 00:09:37.445 "state": "online", 00:09:37.445 "raid_level": "raid0", 00:09:37.445 "superblock": true, 00:09:37.445 "num_base_bdevs": 3, 00:09:37.445 "num_base_bdevs_discovered": 3, 00:09:37.445 "num_base_bdevs_operational": 3, 00:09:37.445 "base_bdevs_list": [ 00:09:37.445 { 00:09:37.445 "name": "pt1", 00:09:37.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.445 "is_configured": true, 00:09:37.445 "data_offset": 2048, 00:09:37.445 "data_size": 63488 00:09:37.445 }, 00:09:37.445 { 00:09:37.445 "name": "pt2", 00:09:37.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.445 "is_configured": true, 00:09:37.445 "data_offset": 2048, 00:09:37.445 "data_size": 63488 00:09:37.445 }, 00:09:37.445 { 00:09:37.445 "name": "pt3", 00:09:37.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.445 "is_configured": true, 00:09:37.445 "data_offset": 2048, 00:09:37.445 "data_size": 63488 00:09:37.445 } 00:09:37.445 ] 00:09:37.445 } 00:09:37.445 } 00:09:37.445 }' 00:09:37.445 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.445 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:37.445 pt2 00:09:37.445 pt3' 00:09:37.445 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:37.445 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:37.445 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:37.704 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:37.704 "name": "pt1", 00:09:37.704 "aliases": [ 00:09:37.704 "00000000-0000-0000-0000-000000000001" 00:09:37.704 ], 00:09:37.704 "product_name": "passthru", 00:09:37.704 "block_size": 512, 00:09:37.704 "num_blocks": 65536, 00:09:37.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.704 "assigned_rate_limits": { 00:09:37.704 "rw_ios_per_sec": 0, 00:09:37.704 "rw_mbytes_per_sec": 0, 00:09:37.704 "r_mbytes_per_sec": 0, 00:09:37.704 "w_mbytes_per_sec": 0 00:09:37.704 }, 00:09:37.704 "claimed": true, 00:09:37.704 "claim_type": "exclusive_write", 00:09:37.704 "zoned": false, 00:09:37.704 "supported_io_types": { 00:09:37.704 "read": true, 00:09:37.704 "write": true, 00:09:37.704 "unmap": true, 00:09:37.704 "flush": true, 00:09:37.704 "reset": true, 00:09:37.704 "nvme_admin": false, 00:09:37.704 "nvme_io": false, 00:09:37.704 "nvme_io_md": false, 00:09:37.704 "write_zeroes": true, 00:09:37.704 "zcopy": true, 00:09:37.704 "get_zone_info": false, 00:09:37.704 "zone_management": false, 00:09:37.704 "zone_append": false, 00:09:37.704 "compare": false, 00:09:37.704 "compare_and_write": false, 00:09:37.704 "abort": true, 
00:09:37.704 "seek_hole": false, 00:09:37.704 "seek_data": false, 00:09:37.704 "copy": true, 00:09:37.704 "nvme_iov_md": false 00:09:37.704 }, 00:09:37.704 "memory_domains": [ 00:09:37.704 { 00:09:37.704 "dma_device_id": "system", 00:09:37.704 "dma_device_type": 1 00:09:37.704 }, 00:09:37.704 { 00:09:37.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.704 "dma_device_type": 2 00:09:37.704 } 00:09:37.704 ], 00:09:37.704 "driver_specific": { 00:09:37.704 "passthru": { 00:09:37.704 "name": "pt1", 00:09:37.704 "base_bdev_name": "malloc1" 00:09:37.704 } 00:09:37.704 } 00:09:37.704 }' 00:09:37.704 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.704 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:37.704 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:37.704 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:37.964 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:38.223 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:38.223 "name": "pt2", 00:09:38.223 "aliases": [ 00:09:38.223 "00000000-0000-0000-0000-000000000002" 00:09:38.223 ], 00:09:38.223 "product_name": "passthru", 00:09:38.223 "block_size": 512, 00:09:38.223 "num_blocks": 65536, 00:09:38.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.223 "assigned_rate_limits": { 00:09:38.223 "rw_ios_per_sec": 0, 00:09:38.223 "rw_mbytes_per_sec": 0, 00:09:38.223 "r_mbytes_per_sec": 0, 00:09:38.223 "w_mbytes_per_sec": 0 00:09:38.223 }, 00:09:38.223 "claimed": true, 00:09:38.223 "claim_type": "exclusive_write", 00:09:38.223 "zoned": false, 00:09:38.223 "supported_io_types": { 00:09:38.223 "read": true, 00:09:38.223 "write": true, 00:09:38.223 "unmap": true, 00:09:38.223 "flush": true, 00:09:38.223 "reset": true, 00:09:38.223 "nvme_admin": false, 00:09:38.223 "nvme_io": false, 00:09:38.223 "nvme_io_md": false, 00:09:38.223 "write_zeroes": true, 00:09:38.223 "zcopy": true, 00:09:38.223 "get_zone_info": false, 00:09:38.223 "zone_management": false, 00:09:38.223 "zone_append": false, 00:09:38.223 "compare": false, 00:09:38.223 "compare_and_write": false, 00:09:38.223 "abort": true, 00:09:38.223 "seek_hole": false, 00:09:38.223 "seek_data": false, 00:09:38.223 "copy": true, 00:09:38.223 "nvme_iov_md": false 00:09:38.223 }, 
00:09:38.223 "memory_domains": [ 00:09:38.223 { 00:09:38.223 "dma_device_id": "system", 00:09:38.223 "dma_device_type": 1 00:09:38.223 }, 00:09:38.223 { 00:09:38.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.223 "dma_device_type": 2 00:09:38.223 } 00:09:38.223 ], 00:09:38.223 "driver_specific": { 00:09:38.223 "passthru": { 00:09:38.223 "name": "pt2", 00:09:38.223 "base_bdev_name": "malloc2" 00:09:38.223 } 00:09:38.223 } 00:09:38.223 }' 00:09:38.223 06:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:38.223 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:38.483 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:38.742 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:38.742 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:38.742 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:38.742 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:39.000 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:39.000 "name": "pt3", 00:09:39.000 "aliases": [ 00:09:39.000 "00000000-0000-0000-0000-000000000003" 00:09:39.000 ], 00:09:39.000 "product_name": "passthru", 00:09:39.000 "block_size": 512, 00:09:39.000 "num_blocks": 65536, 00:09:39.000 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.000 "assigned_rate_limits": { 00:09:39.000 "rw_ios_per_sec": 0, 00:09:39.000 "rw_mbytes_per_sec": 0, 00:09:39.000 "r_mbytes_per_sec": 0, 00:09:39.000 "w_mbytes_per_sec": 0 00:09:39.000 }, 00:09:39.000 "claimed": true, 00:09:39.000 "claim_type": "exclusive_write", 00:09:39.000 "zoned": false, 00:09:39.000 "supported_io_types": { 00:09:39.000 "read": true, 00:09:39.000 "write": true, 00:09:39.000 "unmap": true, 00:09:39.000 "flush": true, 00:09:39.000 "reset": true, 00:09:39.000 "nvme_admin": false, 00:09:39.000 "nvme_io": false, 00:09:39.000 "nvme_io_md": false, 00:09:39.000 "write_zeroes": true, 00:09:39.000 "zcopy": true, 00:09:39.000 "get_zone_info": false, 00:09:39.000 "zone_management": false, 00:09:39.000 "zone_append": false, 00:09:39.000 "compare": false, 00:09:39.000 "compare_and_write": false, 00:09:39.000 "abort": true, 00:09:39.000 "seek_hole": false, 00:09:39.000 "seek_data": false, 00:09:39.000 "copy": true, 00:09:39.000 "nvme_iov_md": false 00:09:39.000 }, 00:09:39.000 "memory_domains": [ 00:09:39.000 { 00:09:39.000 "dma_device_id": "system", 00:09:39.000 "dma_device_type": 1 00:09:39.000 }, 00:09:39.000 { 
00:09:39.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.000 "dma_device_type": 2 00:09:39.001 } 00:09:39.001 ], 00:09:39.001 "driver_specific": { 00:09:39.001 "passthru": { 00:09:39.001 "name": "pt3", 00:09:39.001 "base_bdev_name": "malloc3" 00:09:39.001 } 00:09:39.001 } 00:09:39.001 }' 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.001 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.260 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:39.260 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.260 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.260 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:39.260 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:39.260 06:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:09:39.519 [2024-08-13 06:04:41.093776] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.519 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=707b7eb6-e1dc-4838-ba4a-ac02cae9f56b 00:09:39.519 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 707b7eb6-e1dc-4838-ba4a-ac02cae9f56b ']' 00:09:39.519 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:39.519 [2024-08-13 06:04:41.293149] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.519 [2024-08-13 06:04:41.293189] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.519 [2024-08-13 06:04:41.293302] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.519 [2024-08-13 06:04:41.293370] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.519 [2024-08-13 06:04:41.293380] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:39.779 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.779 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:09:39.779 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:09:39.779 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:09:39.779 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 
-- # for i in "${base_bdevs_pt[@]}" 00:09:39.779 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:40.038 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.038 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:40.298 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.298 06:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:40.298 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:40.298 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:40.557 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:40.817 [2024-08-13 06:04:42.463256] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.817 [2024-08-13 06:04:42.465254] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:40.817 [2024-08-13 06:04:42.465313] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:40.817 [2024-08-13 06:04:42.465376] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:40.817 [2024-08-13 06:04:42.465436] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:40.817 [2024-08-13 06:04:42.465458] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:40.817 [2024-08-13 06:04:42.465475] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.817 [2024-08-13 06:04:42.465486] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:40.817 request: 00:09:40.817 { 00:09:40.817 "name": "raid_bdev1", 00:09:40.817 "raid_level": "raid0", 00:09:40.817 "base_bdevs": [ 00:09:40.817 "malloc1", 00:09:40.817 "malloc2", 00:09:40.817 "malloc3" 00:09:40.817 ], 00:09:40.817 "strip_size_kb": 64, 00:09:40.817 "superblock": false, 00:09:40.817 "method": "bdev_raid_create", 00:09:40.817 "req_id": 1 00:09:40.817 } 00:09:40.817 Got JSON-RPC error response 00:09:40.817 response: 00:09:40.817 { 00:09:40.817 "code": -17, 00:09:40.817 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:40.817 } 00:09:40.817 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:09:40.817 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:09:40.817 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:09:40.817 06:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:09:40.817 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.817 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.077 [2024-08-13 06:04:42.846449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.077 [2024-08-13 06:04:42.846516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.077 [2024-08-13 06:04:42.846536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:41.077 [2024-08-13 06:04:42.846544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.077 [2024-08-13 06:04:42.848656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.077 [2024-08-13 06:04:42.848693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.077 [2024-08-13 06:04:42.848804] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:41.077 [2024-08-13 06:04:42.848838] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.077 pt1 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:41.077 
06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.077 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:41.337 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.337 06:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.337 06:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.337 "name": "raid_bdev1", 00:09:41.337 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:41.337 "strip_size_kb": 64, 00:09:41.337 "state": "configuring", 00:09:41.337 "raid_level": "raid0", 00:09:41.337 "superblock": true, 00:09:41.337 "num_base_bdevs": 3, 00:09:41.337 "num_base_bdevs_discovered": 1, 00:09:41.337 "num_base_bdevs_operational": 3, 00:09:41.337 "base_bdevs_list": [ 00:09:41.337 { 00:09:41.337 "name": "pt1", 00:09:41.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.337 "is_configured": true, 00:09:41.337 "data_offset": 2048, 00:09:41.337 "data_size": 63488 00:09:41.337 }, 00:09:41.337 { 00:09:41.337 "name": null, 00:09:41.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.337 "is_configured": false, 00:09:41.337 "data_offset": 2048, 00:09:41.337 "data_size": 63488 00:09:41.337 }, 00:09:41.337 { 00:09:41.337 "name": null, 00:09:41.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.337 "is_configured": false, 00:09:41.337 "data_offset": 2048, 00:09:41.337 "data_size": 63488 00:09:41.337 } 00:09:41.337 ] 00:09:41.337 }' 00:09:41.337 06:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.337 06:04:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.905 06:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:09:41.905 06:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.174 [2024-08-13 06:04:43.808820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.174 [2024-08-13 06:04:43.808990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.174 [2024-08-13 06:04:43.809042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:42.174 [2024-08-13 06:04:43.809072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.174 [2024-08-13 06:04:43.809517] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.174 [2024-08-13 06:04:43.809576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.174 [2024-08-13 06:04:43.809684] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:42.174 [2024-08-13 06:04:43.809735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.174 pt2 00:09:42.174 06:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:42.435 [2024-08-13 06:04:44.008545] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.435 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.695 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.695 "name": "raid_bdev1", 00:09:42.695 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:42.695 "strip_size_kb": 64, 00:09:42.695 "state": "configuring", 00:09:42.695 "raid_level": "raid0", 00:09:42.695 "superblock": true, 00:09:42.695 "num_base_bdevs": 3, 00:09:42.695 "num_base_bdevs_discovered": 1, 00:09:42.695 "num_base_bdevs_operational": 3, 00:09:42.695 "base_bdevs_list": [ 00:09:42.695 { 00:09:42.695 "name": "pt1", 00:09:42.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.695 "is_configured": true, 00:09:42.695 "data_offset": 2048, 00:09:42.695 "data_size": 63488 00:09:42.695 }, 00:09:42.695 { 00:09:42.695 "name": null, 00:09:42.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.695 "is_configured": false, 00:09:42.695 "data_offset": 2048, 00:09:42.695 "data_size": 63488 00:09:42.695 }, 00:09:42.695 { 00:09:42.695 "name": null, 00:09:42.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.695 "is_configured": false, 00:09:42.695 "data_offset": 2048, 00:09:42.695 "data_size": 63488 00:09:42.695 } 00:09:42.695 ] 00:09:42.695 }' 00:09:42.695 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.695 06:04:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.264 06:04:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:09:43.264 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:09:43.264 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.264 [2024-08-13 06:04:44.930894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.264 [2024-08-13 06:04:44.930971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.264 [2024-08-13 06:04:44.930990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:43.264 [2024-08-13 06:04:44.931001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.264 [2024-08-13 06:04:44.931443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.264 [2024-08-13 06:04:44.931468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.264 [2024-08-13 06:04:44.931559] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:43.264 [2024-08-13 06:04:44.931586] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.264 pt2 00:09:43.264 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:09:43.264 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:09:43.264 06:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.524 [2024-08-13 06:04:45.130556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.524 [2024-08-13 06:04:45.130713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.524 [2024-08-13 06:04:45.130748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:43.524 [2024-08-13 06:04:45.130779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.524 [2024-08-13 06:04:45.131207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.524 [2024-08-13 06:04:45.131272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.524 [2024-08-13 06:04:45.131374] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:43.524 [2024-08-13 06:04:45.131425] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.524 [2024-08-13 06:04:45.131553] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:43.524 [2024-08-13 06:04:45.131643] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:43.524 [2024-08-13 06:04:45.131893] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:43.524 [2024-08-13 06:04:45.132059] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:43.524 [2024-08-13 06:04:45.132099] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:43.524 [2024-08-13 06:04:45.132235] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:43.524 pt3 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.524 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.784 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:43.784 "name": "raid_bdev1", 00:09:43.784 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:43.784 "strip_size_kb": 64, 00:09:43.784 "state": "online", 00:09:43.784 "raid_level": "raid0", 00:09:43.784 "superblock": true, 00:09:43.784 "num_base_bdevs": 3, 00:09:43.784 "num_base_bdevs_discovered": 3, 00:09:43.784 "num_base_bdevs_operational": 3, 00:09:43.784 "base_bdevs_list": [ 00:09:43.784 { 00:09:43.784 "name": "pt1", 00:09:43.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.784 "is_configured": true, 00:09:43.784 "data_offset": 2048, 00:09:43.784 "data_size": 63488 00:09:43.784 }, 00:09:43.784 { 00:09:43.784 "name": "pt2", 00:09:43.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.784 "is_configured": true, 00:09:43.784 "data_offset": 2048, 00:09:43.784 "data_size": 63488 00:09:43.784 }, 00:09:43.784 { 00:09:43.784 "name": "pt3", 00:09:43.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.784 "is_configured": true, 00:09:43.784 "data_offset": 2048, 00:09:43.784 "data_size": 63488 00:09:43.784 } 00:09:43.784 ] 00:09:43.784 }' 00:09:43.784 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:43.784 06:04:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:44.353 06:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:44.353 [2024-08-13 06:04:46.065307] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.353 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:44.353 "name": "raid_bdev1", 00:09:44.353 "aliases": [ 00:09:44.353 "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b" 00:09:44.353 ], 00:09:44.353 "product_name": "Raid Volume", 00:09:44.353 "block_size": 512, 00:09:44.353 "num_blocks": 190464, 00:09:44.353 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:44.353 "assigned_rate_limits": { 00:09:44.353 "rw_ios_per_sec": 0, 00:09:44.353 "rw_mbytes_per_sec": 0, 00:09:44.353 "r_mbytes_per_sec": 0, 00:09:44.353 "w_mbytes_per_sec": 0 00:09:44.353 }, 00:09:44.353 "claimed": false, 00:09:44.353 "zoned": false, 00:09:44.353 "supported_io_types": { 00:09:44.353 "read": true, 00:09:44.353 "write": true, 00:09:44.353 "unmap": true, 00:09:44.353 "flush": true, 00:09:44.353 "reset": true, 00:09:44.353 "nvme_admin": false, 00:09:44.353 "nvme_io": false, 00:09:44.353 "nvme_io_md": false, 00:09:44.353 "write_zeroes": true, 00:09:44.353 "zcopy": false, 00:09:44.353 "get_zone_info": false, 00:09:44.353 "zone_management": false, 00:09:44.353 "zone_append": false, 00:09:44.353 "compare": false, 00:09:44.353 "compare_and_write": false, 00:09:44.353 "abort": false, 00:09:44.353 "seek_hole": false, 00:09:44.353 "seek_data": false, 00:09:44.353 "copy": false, 00:09:44.353 "nvme_iov_md": false 00:09:44.353 }, 00:09:44.353 "memory_domains": [ 00:09:44.353 { 00:09:44.353 "dma_device_id": "system", 00:09:44.353 "dma_device_type": 1 00:09:44.353 }, 00:09:44.353 { 00:09:44.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.353 "dma_device_type": 2 00:09:44.353 }, 00:09:44.353 { 00:09:44.353 "dma_device_id": "system", 00:09:44.353 "dma_device_type": 1 00:09:44.353 }, 00:09:44.353 { 00:09:44.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.353 "dma_device_type": 2 00:09:44.353 }, 00:09:44.353 { 00:09:44.353 "dma_device_id": "system", 00:09:44.353 "dma_device_type": 1 00:09:44.353 }, 00:09:44.353 { 00:09:44.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.353 "dma_device_type": 2 00:09:44.353 } 00:09:44.353 ], 00:09:44.353 "driver_specific": { 00:09:44.353 "raid": { 00:09:44.353 "uuid": "707b7eb6-e1dc-4838-ba4a-ac02cae9f56b", 00:09:44.353 "strip_size_kb": 64, 00:09:44.353 "state": "online", 00:09:44.353 "raid_level": "raid0", 00:09:44.353 "superblock": true, 00:09:44.353 "num_base_bdevs": 3, 00:09:44.353 "num_base_bdevs_discovered": 3, 00:09:44.353 "num_base_bdevs_operational": 3, 00:09:44.353 "base_bdevs_list": [ 00:09:44.353 { 00:09:44.353 "name": "pt1", 00:09:44.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.353 "is_configured": true, 00:09:44.353 "data_offset": 2048, 00:09:44.353 "data_size": 63488 00:09:44.353 }, 00:09:44.353 { 00:09:44.353 "name": "pt2", 00:09:44.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.353 "is_configured": true, 00:09:44.353 "data_offset": 2048, 00:09:44.353 "data_size": 63488 00:09:44.353 }, 00:09:44.353 { 00:09:44.353 "name": "pt3", 00:09:44.353 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:44.353 "is_configured": true, 00:09:44.353 "data_offset": 2048, 00:09:44.353 "data_size": 63488 00:09:44.353 } 00:09:44.353 ] 00:09:44.353 } 00:09:44.353 } 00:09:44.353 }' 00:09:44.353 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.353 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:44.353 pt2 00:09:44.353 pt3' 00:09:44.353 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:44.353 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:44.353 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:44.613 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:44.613 "name": "pt1", 00:09:44.613 "aliases": [ 00:09:44.613 "00000000-0000-0000-0000-000000000001" 00:09:44.613 ], 00:09:44.613 "product_name": "passthru", 00:09:44.613 "block_size": 512, 00:09:44.613 "num_blocks": 65536, 00:09:44.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.613 "assigned_rate_limits": { 00:09:44.613 "rw_ios_per_sec": 0, 00:09:44.613 "rw_mbytes_per_sec": 0, 00:09:44.613 "r_mbytes_per_sec": 0, 00:09:44.613 "w_mbytes_per_sec": 0 00:09:44.613 }, 00:09:44.613 "claimed": true, 00:09:44.613 "claim_type": "exclusive_write", 00:09:44.613 "zoned": false, 00:09:44.613 "supported_io_types": { 00:09:44.613 "read": true, 00:09:44.613 "write": true, 00:09:44.613 "unmap": true, 00:09:44.613 "flush": true, 00:09:44.613 "reset": true, 00:09:44.613 "nvme_admin": false, 00:09:44.613 "nvme_io": false, 00:09:44.613 "nvme_io_md": false, 00:09:44.613 "write_zeroes": true, 00:09:44.613 "zcopy": true, 00:09:44.613 "get_zone_info": false, 00:09:44.613 "zone_management": false, 00:09:44.613 "zone_append": false, 00:09:44.613 "compare": false, 00:09:44.613 "compare_and_write": false, 00:09:44.613 "abort": true, 00:09:44.613 "seek_hole": false, 00:09:44.613 "seek_data": false, 00:09:44.613 "copy": true, 00:09:44.613 "nvme_iov_md": false 00:09:44.613 }, 00:09:44.613 "memory_domains": [ 00:09:44.613 { 00:09:44.613 "dma_device_id": "system", 00:09:44.613 "dma_device_type": 1 00:09:44.613 }, 00:09:44.613 { 00:09:44.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.613 "dma_device_type": 2 00:09:44.613 } 00:09:44.613 ], 00:09:44.613 "driver_specific": { 00:09:44.613 "passthru": { 00:09:44.613 "name": "pt1", 00:09:44.613 "base_bdev_name": "malloc1" 00:09:44.613 } 00:09:44.613 } 00:09:44.613 }' 00:09:44.613 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.613 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.872 06:04:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:44.872 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:45.131 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:45.131 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:45.131 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:45.131 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:45.131 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:45.131 "name": "pt2", 00:09:45.131 "aliases": [ 00:09:45.131 "00000000-0000-0000-0000-000000000002" 00:09:45.131 ], 00:09:45.131 "product_name": "passthru", 00:09:45.131 "block_size": 512, 00:09:45.131 "num_blocks": 65536, 00:09:45.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.132 "assigned_rate_limits": { 00:09:45.132 "rw_ios_per_sec": 0, 00:09:45.132 "rw_mbytes_per_sec": 0, 00:09:45.132 "r_mbytes_per_sec": 0, 00:09:45.132 "w_mbytes_per_sec": 0 00:09:45.132 }, 00:09:45.132 "claimed": true, 00:09:45.132 "claim_type": "exclusive_write", 00:09:45.132 "zoned": false, 00:09:45.132 "supported_io_types": { 00:09:45.132 "read": true, 00:09:45.132 "write": true, 00:09:45.132 "unmap": true, 00:09:45.132 "flush": true, 00:09:45.132 "reset": true, 00:09:45.132 "nvme_admin": false, 00:09:45.132 "nvme_io": false, 00:09:45.132 "nvme_io_md": false, 00:09:45.132 "write_zeroes": true, 00:09:45.132 "zcopy": true, 00:09:45.132 "get_zone_info": false, 00:09:45.132 "zone_management": false, 00:09:45.132 "zone_append": false, 00:09:45.132 "compare": false, 00:09:45.132 "compare_and_write": false, 00:09:45.132 "abort": true, 00:09:45.132 "seek_hole": false, 00:09:45.132 "seek_data": false, 00:09:45.132 "copy": true, 00:09:45.132 "nvme_iov_md": false 00:09:45.132 }, 00:09:45.132 "memory_domains": [ 00:09:45.132 { 00:09:45.132 "dma_device_id": "system", 00:09:45.132 "dma_device_type": 1 00:09:45.132 }, 00:09:45.132 { 00:09:45.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.132 "dma_device_type": 2 00:09:45.132 } 00:09:45.132 ], 00:09:45.132 "driver_specific": { 00:09:45.132 "passthru": { 00:09:45.132 "name": "pt2", 00:09:45.132 "base_bdev_name": "malloc2" 00:09:45.132 } 00:09:45.132 } 00:09:45.132 }' 00:09:45.132 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:45.391 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:45.391 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:45.391 06:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:45.391 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:45.391 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:45.391 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:45.391 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:45.391 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:45.391 06:04:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:45.651 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:45.651 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:45.651 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:45.651 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:45.651 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:45.910 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:45.910 "name": "pt3", 00:09:45.910 "aliases": [ 00:09:45.910 "00000000-0000-0000-0000-000000000003" 00:09:45.910 ], 00:09:45.910 "product_name": "passthru", 00:09:45.910 "block_size": 512, 00:09:45.910 "num_blocks": 65536, 00:09:45.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.910 "assigned_rate_limits": { 00:09:45.910 "rw_ios_per_sec": 0, 00:09:45.911 "rw_mbytes_per_sec": 0, 00:09:45.911 "r_mbytes_per_sec": 0, 00:09:45.911 "w_mbytes_per_sec": 0 00:09:45.911 }, 00:09:45.911 "claimed": true, 00:09:45.911 "claim_type": "exclusive_write", 00:09:45.911 "zoned": false, 00:09:45.911 "supported_io_types": { 00:09:45.911 "read": true, 00:09:45.911 "write": true, 00:09:45.911 "unmap": true, 00:09:45.911 "flush": true, 00:09:45.911 "reset": true, 00:09:45.911 "nvme_admin": false, 00:09:45.911 "nvme_io": false, 00:09:45.911 "nvme_io_md": false, 00:09:45.911 "write_zeroes": true, 00:09:45.911 "zcopy": true, 00:09:45.911 "get_zone_info": false, 00:09:45.911 "zone_management": false, 00:09:45.911 "zone_append": false, 00:09:45.911 "compare": false, 00:09:45.911 "compare_and_write": false, 00:09:45.911 "abort": true, 00:09:45.911 "seek_hole": false, 00:09:45.911 "seek_data": false, 00:09:45.911 "copy": true, 00:09:45.911 "nvme_iov_md": false 00:09:45.911 }, 00:09:45.911 "memory_domains": [ 00:09:45.911 { 00:09:45.911 "dma_device_id": "system", 00:09:45.911 "dma_device_type": 1 00:09:45.911 }, 00:09:45.911 { 00:09:45.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.911 "dma_device_type": 2 00:09:45.911 } 00:09:45.911 ], 00:09:45.911 "driver_specific": { 00:09:45.911 "passthru": { 00:09:45.911 "name": "pt3", 00:09:45.911 "base_bdev_name": "malloc3" 00:09:45.911 } 00:09:45.911 } 00:09:45.911 }' 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:45.911 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.170 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:46.170 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.170 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.170 
06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:46.170 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:09:46.170 06:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:46.430 [2024-08-13 06:04:47.986060] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 707b7eb6-e1dc-4838-ba4a-ac02cae9f56b '!=' 707b7eb6-e1dc-4838-ba4a-ac02cae9f56b ']' 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 76975 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 76975 ']' 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 76975 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76975 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76975' 00:09:46.430 killing process with pid 76975 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 76975 00:09:46.430 [2024-08-13 06:04:48.047398] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.430 [2024-08-13 06:04:48.047502] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.430 [2024-08-13 06:04:48.047564] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.430 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 76975 00:09:46.430 [2024-08-13 06:04:48.047578] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:46.430 [2024-08-13 06:04:48.080021] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.689 06:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:09:46.689 00:09:46.689 real 0m12.539s 00:09:46.689 user 0m22.785s 00:09:46.689 sys 0m1.936s 00:09:46.689 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:46.689 06:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.689 ************************************ 00:09:46.689 END TEST raid_superblock_test 00:09:46.689 ************************************ 00:09:46.689 06:04:48 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:46.689 06:04:48 bdev_raid -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:46.689 06:04:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:46.689 06:04:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.689 ************************************ 00:09:46.689 START TEST raid_read_error_test 00:09:46.689 ************************************ 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 read 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:09:46.689 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.y4JAewimJi 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=77414 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:46.690 06:04:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 77414 /var/tmp/spdk-raid.sock 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 77414 ']' 00:09:46.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:46.690 06:04:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.949 [2024-08-13 06:04:48.483810] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:09:46.949 [2024-08-13 06:04:48.483930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77414 ] 00:09:46.949 [2024-08-13 06:04:48.628981] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.949 [2024-08-13 06:04:48.677535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.949 [2024-08-13 06:04:48.720024] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.949 [2024-08-13 06:04:48.720070] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.521 06:04:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:47.521 06:04:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:09:47.521 06:04:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:47.521 06:04:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:47.781 BaseBdev1_malloc 00:09:47.781 06:04:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:48.041 true 00:09:48.041 06:04:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.300 [2024-08-13 06:04:49.876016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.300 [2024-08-13 06:04:49.876211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.300 [2024-08-13 06:04:49.876257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:48.300 [2024-08-13 06:04:49.876292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.300 [2024-08-13 06:04:49.878581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.300 [2024-08-13 06:04:49.878662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.300 
BaseBdev1 00:09:48.300 06:04:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:48.300 06:04:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.560 BaseBdev2_malloc 00:09:48.560 06:04:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:48.560 true 00:09:48.560 06:04:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.820 [2024-08-13 06:04:50.479816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.820 [2024-08-13 06:04:50.479986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.820 [2024-08-13 06:04:50.480030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:48.820 [2024-08-13 06:04:50.480090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.820 [2024-08-13 06:04:50.482337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.820 [2024-08-13 06:04:50.482433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.820 BaseBdev2 00:09:48.820 06:04:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:48.820 06:04:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.079 BaseBdev3_malloc 00:09:49.079 06:04:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:49.339 true 00:09:49.339 06:04:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.339 [2024-08-13 06:04:51.105845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.339 [2024-08-13 06:04:51.105997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.339 [2024-08-13 06:04:51.106067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:49.339 [2024-08-13 06:04:51.106102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.339 [2024-08-13 06:04:51.108189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.339 [2024-08-13 06:04:51.108283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:49.339 BaseBdev3 00:09:49.339 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:49.598 [2024-08-13 06:04:51.301600] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.598 [2024-08-13 06:04:51.303513] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
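The raid_bdev1 being claimed here sits on three identical passthru-over-error-over-malloc stacks, built with the same RPCs the trace echoes above. A minimal by-hand sketch of that sequence, assuming an SPDK target is already running and listening on the socket this run uses (names, sizes, and flags are taken from the run itself, not invented):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for i in 1 2 3; do
    # 32 MiB malloc base bdev with 512-byte blocks (65536 blocks, as reported above)
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
    # error-injection wrapper; exposes EE_BaseBdev${i}_malloc
    $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
    # passthru bdev on top of the error bdev, named BaseBdev${i}
    $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  # raid0 across the three passthru bdevs, 64 KiB strip, with superblock (-s)
  $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s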
00:09:49.599 [2024-08-13 06:04:51.303643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.599 [2024-08-13 06:04:51.303871] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:49.599 [2024-08-13 06:04:51.303919] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.599 [2024-08-13 06:04:51.304239] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:49.599 [2024-08-13 06:04:51.304427] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:49.599 [2024-08-13 06:04:51.304475] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:49.599 [2024-08-13 06:04:51.304651] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.599 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.858 06:04:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:49.858 "name": "raid_bdev1", 00:09:49.858 "uuid": "441bd604-f44d-4f91-b044-1af7ba3e22a3", 00:09:49.858 "strip_size_kb": 64, 00:09:49.858 "state": "online", 00:09:49.858 "raid_level": "raid0", 00:09:49.858 "superblock": true, 00:09:49.858 "num_base_bdevs": 3, 00:09:49.858 "num_base_bdevs_discovered": 3, 00:09:49.858 "num_base_bdevs_operational": 3, 00:09:49.858 "base_bdevs_list": [ 00:09:49.858 { 00:09:49.858 "name": "BaseBdev1", 00:09:49.858 "uuid": "b0c7d7b2-4654-52e4-bd77-550b76b2a403", 00:09:49.858 "is_configured": true, 00:09:49.858 "data_offset": 2048, 00:09:49.858 "data_size": 63488 00:09:49.858 }, 00:09:49.858 { 00:09:49.858 "name": "BaseBdev2", 00:09:49.858 "uuid": "aa4c2e3e-4aca-53b6-a0ac-0fa71ad4e660", 00:09:49.858 "is_configured": true, 00:09:49.858 "data_offset": 2048, 00:09:49.858 "data_size": 63488 00:09:49.858 }, 00:09:49.858 { 00:09:49.858 "name": "BaseBdev3", 00:09:49.858 "uuid": "ec2ddeba-3313-5494-9cc0-9935fbff947e", 00:09:49.858 "is_configured": true, 00:09:49.858 "data_offset": 2048, 00:09:49.858 "data_size": 63488 00:09:49.858 } 00:09:49.858 ] 00:09:49.858 }' 00:09:49.858 06:04:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:49.858 06:04:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.428 06:04:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:50.428 06:04:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:09:50.428 [2024-08-13 06:04:52.128567] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:51.367 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.627 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.887 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.887 "name": "raid_bdev1", 00:09:51.887 "uuid": "441bd604-f44d-4f91-b044-1af7ba3e22a3", 00:09:51.887 "strip_size_kb": 64, 00:09:51.887 "state": "online", 00:09:51.887 "raid_level": "raid0", 00:09:51.887 "superblock": true, 00:09:51.887 "num_base_bdevs": 3, 00:09:51.887 "num_base_bdevs_discovered": 3, 00:09:51.887 "num_base_bdevs_operational": 3, 00:09:51.887 "base_bdevs_list": [ 00:09:51.887 { 00:09:51.887 "name": "BaseBdev1", 00:09:51.887 "uuid": "b0c7d7b2-4654-52e4-bd77-550b76b2a403", 00:09:51.887 "is_configured": true, 00:09:51.887 "data_offset": 2048, 00:09:51.887 "data_size": 63488 00:09:51.887 }, 00:09:51.887 { 00:09:51.887 "name": "BaseBdev2", 00:09:51.887 "uuid": "aa4c2e3e-4aca-53b6-a0ac-0fa71ad4e660", 00:09:51.887 "is_configured": true, 00:09:51.887 "data_offset": 2048, 00:09:51.887 "data_size": 63488 00:09:51.887 }, 00:09:51.887 { 00:09:51.887 "name": "BaseBdev3", 00:09:51.887 "uuid": 
"ec2ddeba-3313-5494-9cc0-9935fbff947e", 00:09:51.887 "is_configured": true, 00:09:51.887 "data_offset": 2048, 00:09:51.887 "data_size": 63488 00:09:51.887 } 00:09:51.887 ] 00:09:51.887 }' 00:09:51.887 06:04:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.887 06:04:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.454 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:52.454 [2024-08-13 06:04:54.167383] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.454 [2024-08-13 06:04:54.167421] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.454 0 00:09:52.454 [2024-08-13 06:04:54.169755] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.455 [2024-08-13 06:04:54.169806] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.455 [2024-08-13 06:04:54.169842] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.455 [2024-08-13 06:04:54.169857] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 77414 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 77414 ']' 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 77414 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77414 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77414' 00:09:52.455 killing process with pid 77414 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 77414 00:09:52.455 [2024-08-13 06:04:54.214097] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.455 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 77414 00:09:52.455 [2024-08-13 06:04:54.238931] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.y4JAewimJi 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:52.714 
06:04:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:09:52.714 00:09:52.714 real 0m6.090s 00:09:52.714 user 0m9.541s 00:09:52.714 sys 0m0.853s 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.714 ************************************ 00:09:52.714 END TEST raid_read_error_test 00:09:52.714 ************************************ 00:09:52.714 06:04:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.974 06:04:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:52.974 06:04:54 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:52.974 06:04:54 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.974 06:04:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.974 ************************************ 00:09:52.974 START TEST raid_write_error_test 00:09:52.974 ************************************ 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 write 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # strip_size=64 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.VmGwgfwjMP 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=77584 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 77584 /var/tmp/spdk-raid.sock 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 77584 ']' 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:52.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:52.974 06:04:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.974 [2024-08-13 06:04:54.646338] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
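Once this bdevperf instance finishes starting up, the write test follows the same shape as the read test above: assemble the raid, kick off perform_tests, inject a write failure into the first base bdev's error wrapper, and read the failure rate back out of the bdevperf log. A sketch of the two steps that differ from the read case, reusing the socket and log path echoed in this run:

  # inject a write failure into the error bdev under BaseBdev1 (bdev_raid.sh@843)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # after the run, pull the raid_bdev1 column the test stores as fail_per_s (bdev_raid.sh@859)
  grep -v Job /raidtest/tmp.VmGwgfwjMP | grep raid_bdev1 | awk '{print $6}'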
00:09:52.974 [2024-08-13 06:04:54.646970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77584 ] 00:09:53.234 [2024-08-13 06:04:54.792440] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.234 [2024-08-13 06:04:54.840148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.234 [2024-08-13 06:04:54.883135] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.234 [2024-08-13 06:04:54.883236] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.804 06:04:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:53.804 06:04:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:09:53.804 06:04:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:53.804 06:04:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.064 BaseBdev1_malloc 00:09:54.064 06:04:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:54.323 true 00:09:54.323 06:04:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.323 [2024-08-13 06:04:56.071174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.323 [2024-08-13 06:04:56.071321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.323 [2024-08-13 06:04:56.071382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:54.323 [2024-08-13 06:04:56.071419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.323 [2024-08-13 06:04:56.073697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.323 [2024-08-13 06:04:56.073802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.323 BaseBdev1 00:09:54.323 06:04:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:54.323 06:04:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.582 BaseBdev2_malloc 00:09:54.582 06:04:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:54.842 true 00:09:54.842 06:04:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.102 [2024-08-13 06:04:56.674872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.102 [2024-08-13 06:04:56.675046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.102 [2024-08-13 06:04:56.675090] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:55.102 [2024-08-13 06:04:56.675157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.102 [2024-08-13 06:04:56.677313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.102 [2024-08-13 06:04:56.677394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.102 BaseBdev2 00:09:55.102 06:04:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:55.102 06:04:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:55.361 BaseBdev3_malloc 00:09:55.361 06:04:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:55.361 true 00:09:55.361 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:55.620 [2024-08-13 06:04:57.309218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:55.620 [2024-08-13 06:04:57.309297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.620 [2024-08-13 06:04:57.309334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:55.620 [2024-08-13 06:04:57.309346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.620 [2024-08-13 06:04:57.311484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.620 [2024-08-13 06:04:57.311525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:55.620 BaseBdev3 00:09:55.620 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:55.879 [2024-08-13 06:04:57.508948] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.879 [2024-08-13 06:04:57.510919] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.879 [2024-08-13 06:04:57.511061] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.879 [2024-08-13 06:04:57.511279] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:55.879 [2024-08-13 06:04:57.511326] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:55.879 [2024-08-13 06:04:57.511643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:55.879 [2024-08-13 06:04:57.511820] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:55.879 [2024-08-13 06:04:57.511865] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:55.879 [2024-08-13 06:04:57.512075] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:55.879 
06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.879 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.138 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:56.138 "name": "raid_bdev1", 00:09:56.138 "uuid": "ebf008d6-ff52-453c-8e7d-59096483a792", 00:09:56.138 "strip_size_kb": 64, 00:09:56.138 "state": "online", 00:09:56.138 "raid_level": "raid0", 00:09:56.138 "superblock": true, 00:09:56.138 "num_base_bdevs": 3, 00:09:56.138 "num_base_bdevs_discovered": 3, 00:09:56.138 "num_base_bdevs_operational": 3, 00:09:56.138 "base_bdevs_list": [ 00:09:56.138 { 00:09:56.138 "name": "BaseBdev1", 00:09:56.138 "uuid": "587edb97-4e12-50aa-9279-733e2d496c50", 00:09:56.138 "is_configured": true, 00:09:56.138 "data_offset": 2048, 00:09:56.138 "data_size": 63488 00:09:56.138 }, 00:09:56.138 { 00:09:56.138 "name": "BaseBdev2", 00:09:56.138 "uuid": "0cf0ff2e-4189-58c8-87cb-a057ce3fa60b", 00:09:56.138 "is_configured": true, 00:09:56.138 "data_offset": 2048, 00:09:56.138 "data_size": 63488 00:09:56.138 }, 00:09:56.138 { 00:09:56.138 "name": "BaseBdev3", 00:09:56.138 "uuid": "958a3115-0960-5c97-8d6a-f05f03adc6d7", 00:09:56.138 "is_configured": true, 00:09:56.138 "data_offset": 2048, 00:09:56.138 "data_size": 63488 00:09:56.138 } 00:09:56.138 ] 00:09:56.138 }' 00:09:56.138 06:04:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:56.138 06:04:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.706 06:04:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:09:56.706 06:04:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:56.706 [2024-08-13 06:04:58.351852] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:57.644 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:57.903 06:04:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.903 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.903 "name": "raid_bdev1", 00:09:57.903 "uuid": "ebf008d6-ff52-453c-8e7d-59096483a792", 00:09:57.903 "strip_size_kb": 64, 00:09:57.903 "state": "online", 00:09:57.903 "raid_level": "raid0", 00:09:57.903 "superblock": true, 00:09:57.903 "num_base_bdevs": 3, 00:09:57.903 "num_base_bdevs_discovered": 3, 00:09:57.903 "num_base_bdevs_operational": 3, 00:09:57.903 "base_bdevs_list": [ 00:09:57.903 { 00:09:57.904 "name": "BaseBdev1", 00:09:57.904 "uuid": "587edb97-4e12-50aa-9279-733e2d496c50", 00:09:57.904 "is_configured": true, 00:09:57.904 "data_offset": 2048, 00:09:57.904 "data_size": 63488 00:09:57.904 }, 00:09:57.904 { 00:09:57.904 "name": "BaseBdev2", 00:09:57.904 "uuid": "0cf0ff2e-4189-58c8-87cb-a057ce3fa60b", 00:09:57.904 "is_configured": true, 00:09:57.904 "data_offset": 2048, 00:09:57.904 "data_size": 63488 00:09:57.904 }, 00:09:57.904 { 00:09:57.904 "name": "BaseBdev3", 00:09:57.904 "uuid": "958a3115-0960-5c97-8d6a-f05f03adc6d7", 00:09:57.904 "is_configured": true, 00:09:57.904 "data_offset": 2048, 00:09:57.904 "data_size": 63488 00:09:57.904 } 00:09:57.904 ] 00:09:57.904 }' 00:09:57.904 06:04:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.904 06:04:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.473 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:58.732 [2024-08-13 06:05:00.403173] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.732 [2024-08-13 06:05:00.403277] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.732 [2024-08-13 06:05:00.405716] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.732 [2024-08-13 06:05:00.405772] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:58.732 [2024-08-13 06:05:00.405809] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.732 [2024-08-13 06:05:00.405818] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:58.732 0 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 77584 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 77584 ']' 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 77584 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77584 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77584' 00:09:58.732 killing process with pid 77584 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 77584 00:09:58.732 [2024-08-13 06:05:00.466742] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.732 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 77584 00:09:58.732 [2024-08-13 06:05:00.491392] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.VmGwgfwjMP 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:09:58.992 00:09:58.992 real 0m6.184s 00:09:58.992 user 0m9.682s 00:09:58.992 sys 0m0.890s 00:09:58.992 ************************************ 00:09:58.992 END TEST raid_write_error_test 00:09:58.992 ************************************ 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:58.992 06:05:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.309 06:05:00 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:09:59.309 06:05:00 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:59.309 06:05:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:59.309 06:05:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.309 06:05:00 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:09:59.309 ************************************ 00:09:59.309 START TEST raid_state_function_test 00:09:59.309 ************************************ 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=77756 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 77756' 00:09:59.309 Process 
raid pid: 77756 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 77756 /var/tmp/spdk-raid.sock 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 77756 ']' 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:59.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:59.309 06:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.309 [2024-08-13 06:05:00.895393] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:09:59.310 [2024-08-13 06:05:00.895602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.310 [2024-08-13 06:05:01.042862] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.310 [2024-08-13 06:05:01.088739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.569 [2024-08-13 06:05:01.131496] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.569 [2024-08-13 06:05:01.131525] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:00.139 [2024-08-13 06:05:01.899136] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.139 [2024-08-13 06:05:01.899289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.139 [2024-08-13 06:05:01.899322] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.139 [2024-08-13 06:05:01.899342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.139 [2024-08-13 06:05:01.899363] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.139 [2024-08-13 06:05:01.899381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:00.139 06:05:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.139 06:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.399 06:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.399 "name": "Existed_Raid", 00:10:00.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.399 "strip_size_kb": 64, 00:10:00.399 "state": "configuring", 00:10:00.399 "raid_level": "concat", 00:10:00.399 "superblock": false, 00:10:00.399 "num_base_bdevs": 3, 00:10:00.399 "num_base_bdevs_discovered": 0, 00:10:00.399 "num_base_bdevs_operational": 3, 00:10:00.399 "base_bdevs_list": [ 00:10:00.399 { 00:10:00.399 "name": "BaseBdev1", 00:10:00.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.399 "is_configured": false, 00:10:00.399 "data_offset": 0, 00:10:00.399 "data_size": 0 00:10:00.399 }, 00:10:00.399 { 00:10:00.399 "name": "BaseBdev2", 00:10:00.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.399 "is_configured": false, 00:10:00.399 "data_offset": 0, 00:10:00.400 "data_size": 0 00:10:00.400 }, 00:10:00.400 { 00:10:00.400 "name": "BaseBdev3", 00:10:00.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.400 "is_configured": false, 00:10:00.400 "data_offset": 0, 00:10:00.400 "data_size": 0 00:10:00.400 } 00:10:00.400 ] 00:10:00.400 }' 00:10:00.400 06:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.400 06:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.971 06:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:01.231 [2024-08-13 06:05:02.833382] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.231 [2024-08-13 06:05:02.833424] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:01.231 06:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:01.491 [2024-08-13 06:05:03.033100] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.491 [2024-08-13 06:05:03.033151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.491 [2024-08-13 06:05:03.033162] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:10:01.491 [2024-08-13 06:05:03.033170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.491 [2024-08-13 06:05:03.033178] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.491 [2024-08-13 06:05:03.033185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.491 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.492 [2024-08-13 06:05:03.217621] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.492 BaseBdev1 00:10:01.492 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:01.492 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:01.492 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:01.492 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:01.492 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:01.492 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:01.492 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:01.752 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.018 [ 00:10:02.018 { 00:10:02.018 "name": "BaseBdev1", 00:10:02.018 "aliases": [ 00:10:02.018 "1027293a-a3af-46a0-9bd2-ca9b3b06f889" 00:10:02.018 ], 00:10:02.018 "product_name": "Malloc disk", 00:10:02.018 "block_size": 512, 00:10:02.018 "num_blocks": 65536, 00:10:02.018 "uuid": "1027293a-a3af-46a0-9bd2-ca9b3b06f889", 00:10:02.018 "assigned_rate_limits": { 00:10:02.018 "rw_ios_per_sec": 0, 00:10:02.018 "rw_mbytes_per_sec": 0, 00:10:02.018 "r_mbytes_per_sec": 0, 00:10:02.018 "w_mbytes_per_sec": 0 00:10:02.018 }, 00:10:02.018 "claimed": true, 00:10:02.018 "claim_type": "exclusive_write", 00:10:02.018 "zoned": false, 00:10:02.018 "supported_io_types": { 00:10:02.018 "read": true, 00:10:02.018 "write": true, 00:10:02.018 "unmap": true, 00:10:02.018 "flush": true, 00:10:02.018 "reset": true, 00:10:02.018 "nvme_admin": false, 00:10:02.018 "nvme_io": false, 00:10:02.018 "nvme_io_md": false, 00:10:02.018 "write_zeroes": true, 00:10:02.018 "zcopy": true, 00:10:02.018 "get_zone_info": false, 00:10:02.018 "zone_management": false, 00:10:02.018 "zone_append": false, 00:10:02.018 "compare": false, 00:10:02.018 "compare_and_write": false, 00:10:02.018 "abort": true, 00:10:02.018 "seek_hole": false, 00:10:02.018 "seek_data": false, 00:10:02.018 "copy": true, 00:10:02.018 "nvme_iov_md": false 00:10:02.018 }, 00:10:02.018 "memory_domains": [ 00:10:02.018 { 00:10:02.018 "dma_device_id": "system", 00:10:02.018 "dma_device_type": 1 00:10:02.018 }, 00:10:02.018 { 00:10:02.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.018 "dma_device_type": 2 00:10:02.018 } 00:10:02.018 ], 00:10:02.018 "driver_specific": {} 00:10:02.019 } 00:10:02.019 ] 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # return 0 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.019 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.278 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:02.278 "name": "Existed_Raid", 00:10:02.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.278 "strip_size_kb": 64, 00:10:02.278 "state": "configuring", 00:10:02.278 "raid_level": "concat", 00:10:02.278 "superblock": false, 00:10:02.278 "num_base_bdevs": 3, 00:10:02.278 "num_base_bdevs_discovered": 1, 00:10:02.278 "num_base_bdevs_operational": 3, 00:10:02.278 "base_bdevs_list": [ 00:10:02.278 { 00:10:02.278 "name": "BaseBdev1", 00:10:02.278 "uuid": "1027293a-a3af-46a0-9bd2-ca9b3b06f889", 00:10:02.278 "is_configured": true, 00:10:02.278 "data_offset": 0, 00:10:02.278 "data_size": 65536 00:10:02.278 }, 00:10:02.278 { 00:10:02.278 "name": "BaseBdev2", 00:10:02.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.278 "is_configured": false, 00:10:02.278 "data_offset": 0, 00:10:02.278 "data_size": 0 00:10:02.278 }, 00:10:02.278 { 00:10:02.278 "name": "BaseBdev3", 00:10:02.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.278 "is_configured": false, 00:10:02.278 "data_offset": 0, 00:10:02.278 "data_size": 0 00:10:02.278 } 00:10:02.278 ] 00:10:02.278 }' 00:10:02.278 06:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:02.278 06:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.847 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:02.847 [2024-08-13 06:05:04.539476] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.847 [2024-08-13 06:05:04.539631] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:02.847 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:03.107 [2024-08-13 06:05:04.743219] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.107 [2024-08-13 06:05:04.745077] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.107 [2024-08-13 06:05:04.745154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.107 [2024-08-13 06:05:04.745184] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.107 [2024-08-13 06:05:04.745204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.107 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:03.107 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:03.107 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:03.108 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.367 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:03.367 "name": "Existed_Raid", 00:10:03.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.367 "strip_size_kb": 64, 00:10:03.367 "state": "configuring", 00:10:03.367 "raid_level": "concat", 00:10:03.367 "superblock": false, 00:10:03.367 "num_base_bdevs": 3, 00:10:03.367 "num_base_bdevs_discovered": 1, 00:10:03.367 "num_base_bdevs_operational": 3, 00:10:03.367 "base_bdevs_list": [ 00:10:03.367 { 00:10:03.367 "name": "BaseBdev1", 00:10:03.367 "uuid": "1027293a-a3af-46a0-9bd2-ca9b3b06f889", 00:10:03.367 "is_configured": true, 00:10:03.367 "data_offset": 0, 00:10:03.367 "data_size": 65536 00:10:03.367 }, 00:10:03.367 { 00:10:03.367 "name": "BaseBdev2", 00:10:03.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.367 "is_configured": false, 00:10:03.367 "data_offset": 0, 00:10:03.367 "data_size": 0 00:10:03.367 }, 00:10:03.367 { 00:10:03.367 "name": "BaseBdev3", 00:10:03.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.367 "is_configured": false, 00:10:03.367 "data_offset": 0, 
00:10:03.367 "data_size": 0 00:10:03.367 } 00:10:03.367 ] 00:10:03.367 }' 00:10:03.367 06:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:03.367 06:05:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.937 [2024-08-13 06:05:05.665560] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.937 BaseBdev2 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:03.937 06:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:04.197 06:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.456 [ 00:10:04.456 { 00:10:04.456 "name": "BaseBdev2", 00:10:04.456 "aliases": [ 00:10:04.457 "6d5c2bb4-c53e-428e-b113-4ffce52c4e84" 00:10:04.457 ], 00:10:04.457 "product_name": "Malloc disk", 00:10:04.457 "block_size": 512, 00:10:04.457 "num_blocks": 65536, 00:10:04.457 "uuid": "6d5c2bb4-c53e-428e-b113-4ffce52c4e84", 00:10:04.457 "assigned_rate_limits": { 00:10:04.457 "rw_ios_per_sec": 0, 00:10:04.457 "rw_mbytes_per_sec": 0, 00:10:04.457 "r_mbytes_per_sec": 0, 00:10:04.457 "w_mbytes_per_sec": 0 00:10:04.457 }, 00:10:04.457 "claimed": true, 00:10:04.457 "claim_type": "exclusive_write", 00:10:04.457 "zoned": false, 00:10:04.457 "supported_io_types": { 00:10:04.457 "read": true, 00:10:04.457 "write": true, 00:10:04.457 "unmap": true, 00:10:04.457 "flush": true, 00:10:04.457 "reset": true, 00:10:04.457 "nvme_admin": false, 00:10:04.457 "nvme_io": false, 00:10:04.457 "nvme_io_md": false, 00:10:04.457 "write_zeroes": true, 00:10:04.457 "zcopy": true, 00:10:04.457 "get_zone_info": false, 00:10:04.457 "zone_management": false, 00:10:04.457 "zone_append": false, 00:10:04.457 "compare": false, 00:10:04.457 "compare_and_write": false, 00:10:04.457 "abort": true, 00:10:04.457 "seek_hole": false, 00:10:04.457 "seek_data": false, 00:10:04.457 "copy": true, 00:10:04.457 "nvme_iov_md": false 00:10:04.457 }, 00:10:04.457 "memory_domains": [ 00:10:04.457 { 00:10:04.457 "dma_device_id": "system", 00:10:04.457 "dma_device_type": 1 00:10:04.457 }, 00:10:04.457 { 00:10:04.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.457 "dma_device_type": 2 00:10:04.457 } 00:10:04.457 ], 00:10:04.457 "driver_specific": {} 00:10:04.457 } 00:10:04.457 ] 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:04.457 
06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.457 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.717 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.717 "name": "Existed_Raid", 00:10:04.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.717 "strip_size_kb": 64, 00:10:04.717 "state": "configuring", 00:10:04.717 "raid_level": "concat", 00:10:04.717 "superblock": false, 00:10:04.717 "num_base_bdevs": 3, 00:10:04.717 "num_base_bdevs_discovered": 2, 00:10:04.717 "num_base_bdevs_operational": 3, 00:10:04.717 "base_bdevs_list": [ 00:10:04.717 { 00:10:04.717 "name": "BaseBdev1", 00:10:04.717 "uuid": "1027293a-a3af-46a0-9bd2-ca9b3b06f889", 00:10:04.717 "is_configured": true, 00:10:04.717 "data_offset": 0, 00:10:04.717 "data_size": 65536 00:10:04.717 }, 00:10:04.717 { 00:10:04.717 "name": "BaseBdev2", 00:10:04.717 "uuid": "6d5c2bb4-c53e-428e-b113-4ffce52c4e84", 00:10:04.717 "is_configured": true, 00:10:04.717 "data_offset": 0, 00:10:04.717 "data_size": 65536 00:10:04.717 }, 00:10:04.717 { 00:10:04.717 "name": "BaseBdev3", 00:10:04.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.717 "is_configured": false, 00:10:04.717 "data_offset": 0, 00:10:04.717 "data_size": 0 00:10:04.717 } 00:10:04.717 ] 00:10:04.717 }' 00:10:04.717 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.717 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.286 [2024-08-13 06:05:06.978428] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.286 [2024-08-13 06:05:06.978557] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:05.286 [2024-08-13 06:05:06.978593] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 
512 00:10:05.286 [2024-08-13 06:05:06.978955] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:05.286 [2024-08-13 06:05:06.979136] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:05.286 [2024-08-13 06:05:06.979180] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:05.286 [2024-08-13 06:05:06.979407] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.286 BaseBdev3 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:05.286 06:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:05.546 06:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.806 [ 00:10:05.806 { 00:10:05.806 "name": "BaseBdev3", 00:10:05.806 "aliases": [ 00:10:05.806 "5d7c8e8e-c898-48b1-90ed-d771b2ace1ac" 00:10:05.806 ], 00:10:05.806 "product_name": "Malloc disk", 00:10:05.806 "block_size": 512, 00:10:05.806 "num_blocks": 65536, 00:10:05.806 "uuid": "5d7c8e8e-c898-48b1-90ed-d771b2ace1ac", 00:10:05.806 "assigned_rate_limits": { 00:10:05.806 "rw_ios_per_sec": 0, 00:10:05.806 "rw_mbytes_per_sec": 0, 00:10:05.806 "r_mbytes_per_sec": 0, 00:10:05.806 "w_mbytes_per_sec": 0 00:10:05.806 }, 00:10:05.806 "claimed": true, 00:10:05.806 "claim_type": "exclusive_write", 00:10:05.806 "zoned": false, 00:10:05.806 "supported_io_types": { 00:10:05.806 "read": true, 00:10:05.806 "write": true, 00:10:05.806 "unmap": true, 00:10:05.806 "flush": true, 00:10:05.806 "reset": true, 00:10:05.806 "nvme_admin": false, 00:10:05.806 "nvme_io": false, 00:10:05.806 "nvme_io_md": false, 00:10:05.806 "write_zeroes": true, 00:10:05.806 "zcopy": true, 00:10:05.806 "get_zone_info": false, 00:10:05.806 "zone_management": false, 00:10:05.806 "zone_append": false, 00:10:05.806 "compare": false, 00:10:05.806 "compare_and_write": false, 00:10:05.806 "abort": true, 00:10:05.806 "seek_hole": false, 00:10:05.806 "seek_data": false, 00:10:05.806 "copy": true, 00:10:05.806 "nvme_iov_md": false 00:10:05.806 }, 00:10:05.806 "memory_domains": [ 00:10:05.806 { 00:10:05.806 "dma_device_id": "system", 00:10:05.806 "dma_device_type": 1 00:10:05.806 }, 00:10:05.806 { 00:10:05.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.806 "dma_device_type": 2 00:10:05.806 } 00:10:05.806 ], 00:10:05.806 "driver_specific": {} 00:10:05.806 } 00:10:05.806 ] 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.806 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.065 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.065 "name": "Existed_Raid", 00:10:06.065 "uuid": "798acb1b-8ded-4b75-ab68-eccdcd065ecd", 00:10:06.065 "strip_size_kb": 64, 00:10:06.065 "state": "online", 00:10:06.065 "raid_level": "concat", 00:10:06.065 "superblock": false, 00:10:06.065 "num_base_bdevs": 3, 00:10:06.065 "num_base_bdevs_discovered": 3, 00:10:06.065 "num_base_bdevs_operational": 3, 00:10:06.065 "base_bdevs_list": [ 00:10:06.065 { 00:10:06.065 "name": "BaseBdev1", 00:10:06.065 "uuid": "1027293a-a3af-46a0-9bd2-ca9b3b06f889", 00:10:06.065 "is_configured": true, 00:10:06.065 "data_offset": 0, 00:10:06.065 "data_size": 65536 00:10:06.065 }, 00:10:06.065 { 00:10:06.065 "name": "BaseBdev2", 00:10:06.065 "uuid": "6d5c2bb4-c53e-428e-b113-4ffce52c4e84", 00:10:06.065 "is_configured": true, 00:10:06.065 "data_offset": 0, 00:10:06.065 "data_size": 65536 00:10:06.065 }, 00:10:06.065 { 00:10:06.065 "name": "BaseBdev3", 00:10:06.065 "uuid": "5d7c8e8e-c898-48b1-90ed-d771b2ace1ac", 00:10:06.065 "is_configured": true, 00:10:06.065 "data_offset": 0, 00:10:06.065 "data_size": 65536 00:10:06.065 } 00:10:06.065 ] 00:10:06.065 }' 00:10:06.065 06:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.065 06:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@198 -- # local name 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:06.635 [2024-08-13 06:05:08.360453] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:06.635 "name": "Existed_Raid", 00:10:06.635 "aliases": [ 00:10:06.635 "798acb1b-8ded-4b75-ab68-eccdcd065ecd" 00:10:06.635 ], 00:10:06.635 "product_name": "Raid Volume", 00:10:06.635 "block_size": 512, 00:10:06.635 "num_blocks": 196608, 00:10:06.635 "uuid": "798acb1b-8ded-4b75-ab68-eccdcd065ecd", 00:10:06.635 "assigned_rate_limits": { 00:10:06.635 "rw_ios_per_sec": 0, 00:10:06.635 "rw_mbytes_per_sec": 0, 00:10:06.635 "r_mbytes_per_sec": 0, 00:10:06.635 "w_mbytes_per_sec": 0 00:10:06.635 }, 00:10:06.635 "claimed": false, 00:10:06.635 "zoned": false, 00:10:06.635 "supported_io_types": { 00:10:06.635 "read": true, 00:10:06.635 "write": true, 00:10:06.635 "unmap": true, 00:10:06.635 "flush": true, 00:10:06.635 "reset": true, 00:10:06.635 "nvme_admin": false, 00:10:06.635 "nvme_io": false, 00:10:06.635 "nvme_io_md": false, 00:10:06.635 "write_zeroes": true, 00:10:06.635 "zcopy": false, 00:10:06.635 "get_zone_info": false, 00:10:06.635 "zone_management": false, 00:10:06.635 "zone_append": false, 00:10:06.635 "compare": false, 00:10:06.635 "compare_and_write": false, 00:10:06.635 "abort": false, 00:10:06.635 "seek_hole": false, 00:10:06.635 "seek_data": false, 00:10:06.635 "copy": false, 00:10:06.635 "nvme_iov_md": false 00:10:06.635 }, 00:10:06.635 "memory_domains": [ 00:10:06.635 { 00:10:06.635 "dma_device_id": "system", 00:10:06.635 "dma_device_type": 1 00:10:06.635 }, 00:10:06.635 { 00:10:06.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.635 "dma_device_type": 2 00:10:06.635 }, 00:10:06.635 { 00:10:06.635 "dma_device_id": "system", 00:10:06.635 "dma_device_type": 1 00:10:06.635 }, 00:10:06.635 { 00:10:06.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.635 "dma_device_type": 2 00:10:06.635 }, 00:10:06.635 { 00:10:06.635 "dma_device_id": "system", 00:10:06.635 "dma_device_type": 1 00:10:06.635 }, 00:10:06.635 { 00:10:06.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.635 "dma_device_type": 2 00:10:06.635 } 00:10:06.635 ], 00:10:06.635 "driver_specific": { 00:10:06.635 "raid": { 00:10:06.635 "uuid": "798acb1b-8ded-4b75-ab68-eccdcd065ecd", 00:10:06.635 "strip_size_kb": 64, 00:10:06.635 "state": "online", 00:10:06.635 "raid_level": "concat", 00:10:06.635 "superblock": false, 00:10:06.635 "num_base_bdevs": 3, 00:10:06.635 "num_base_bdevs_discovered": 3, 00:10:06.635 "num_base_bdevs_operational": 3, 00:10:06.635 "base_bdevs_list": [ 00:10:06.635 { 00:10:06.635 "name": "BaseBdev1", 00:10:06.635 "uuid": "1027293a-a3af-46a0-9bd2-ca9b3b06f889", 00:10:06.635 "is_configured": true, 00:10:06.635 "data_offset": 0, 00:10:06.635 "data_size": 65536 00:10:06.635 }, 00:10:06.635 { 00:10:06.635 "name": "BaseBdev2", 00:10:06.635 "uuid": "6d5c2bb4-c53e-428e-b113-4ffce52c4e84", 00:10:06.635 "is_configured": true, 00:10:06.635 "data_offset": 0, 00:10:06.635 "data_size": 65536 00:10:06.635 }, 00:10:06.635 { 00:10:06.635 "name": "BaseBdev3", 00:10:06.635 "uuid": "5d7c8e8e-c898-48b1-90ed-d771b2ace1ac", 00:10:06.635 "is_configured": true, 00:10:06.635 
"data_offset": 0, 00:10:06.635 "data_size": 65536 00:10:06.635 } 00:10:06.635 ] 00:10:06.635 } 00:10:06.635 } 00:10:06.635 }' 00:10:06.635 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.895 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:06.895 BaseBdev2 00:10:06.895 BaseBdev3' 00:10:06.895 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:06.895 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:06.895 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:06.895 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:06.895 "name": "BaseBdev1", 00:10:06.895 "aliases": [ 00:10:06.895 "1027293a-a3af-46a0-9bd2-ca9b3b06f889" 00:10:06.895 ], 00:10:06.895 "product_name": "Malloc disk", 00:10:06.895 "block_size": 512, 00:10:06.895 "num_blocks": 65536, 00:10:06.895 "uuid": "1027293a-a3af-46a0-9bd2-ca9b3b06f889", 00:10:06.895 "assigned_rate_limits": { 00:10:06.895 "rw_ios_per_sec": 0, 00:10:06.895 "rw_mbytes_per_sec": 0, 00:10:06.895 "r_mbytes_per_sec": 0, 00:10:06.895 "w_mbytes_per_sec": 0 00:10:06.895 }, 00:10:06.895 "claimed": true, 00:10:06.895 "claim_type": "exclusive_write", 00:10:06.895 "zoned": false, 00:10:06.895 "supported_io_types": { 00:10:06.895 "read": true, 00:10:06.895 "write": true, 00:10:06.895 "unmap": true, 00:10:06.895 "flush": true, 00:10:06.895 "reset": true, 00:10:06.895 "nvme_admin": false, 00:10:06.895 "nvme_io": false, 00:10:06.895 "nvme_io_md": false, 00:10:06.895 "write_zeroes": true, 00:10:06.895 "zcopy": true, 00:10:06.895 "get_zone_info": false, 00:10:06.895 "zone_management": false, 00:10:06.895 "zone_append": false, 00:10:06.895 "compare": false, 00:10:06.896 "compare_and_write": false, 00:10:06.896 "abort": true, 00:10:06.896 "seek_hole": false, 00:10:06.896 "seek_data": false, 00:10:06.896 "copy": true, 00:10:06.896 "nvme_iov_md": false 00:10:06.896 }, 00:10:06.896 "memory_domains": [ 00:10:06.896 { 00:10:06.896 "dma_device_id": "system", 00:10:06.896 "dma_device_type": 1 00:10:06.896 }, 00:10:06.896 { 00:10:06.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.896 "dma_device_type": 2 00:10:06.896 } 00:10:06.896 ], 00:10:06.896 "driver_specific": {} 00:10:06.896 }' 00:10:06.896 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:06.896 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:07.155 06:05:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.155 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.415 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:07.415 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:07.415 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:07.415 06:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:07.415 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:07.415 "name": "BaseBdev2", 00:10:07.415 "aliases": [ 00:10:07.415 "6d5c2bb4-c53e-428e-b113-4ffce52c4e84" 00:10:07.415 ], 00:10:07.415 "product_name": "Malloc disk", 00:10:07.415 "block_size": 512, 00:10:07.415 "num_blocks": 65536, 00:10:07.415 "uuid": "6d5c2bb4-c53e-428e-b113-4ffce52c4e84", 00:10:07.415 "assigned_rate_limits": { 00:10:07.415 "rw_ios_per_sec": 0, 00:10:07.415 "rw_mbytes_per_sec": 0, 00:10:07.415 "r_mbytes_per_sec": 0, 00:10:07.415 "w_mbytes_per_sec": 0 00:10:07.415 }, 00:10:07.415 "claimed": true, 00:10:07.415 "claim_type": "exclusive_write", 00:10:07.415 "zoned": false, 00:10:07.415 "supported_io_types": { 00:10:07.415 "read": true, 00:10:07.415 "write": true, 00:10:07.415 "unmap": true, 00:10:07.415 "flush": true, 00:10:07.415 "reset": true, 00:10:07.415 "nvme_admin": false, 00:10:07.415 "nvme_io": false, 00:10:07.415 "nvme_io_md": false, 00:10:07.415 "write_zeroes": true, 00:10:07.415 "zcopy": true, 00:10:07.415 "get_zone_info": false, 00:10:07.415 "zone_management": false, 00:10:07.415 "zone_append": false, 00:10:07.415 "compare": false, 00:10:07.415 "compare_and_write": false, 00:10:07.415 "abort": true, 00:10:07.415 "seek_hole": false, 00:10:07.415 "seek_data": false, 00:10:07.415 "copy": true, 00:10:07.415 "nvme_iov_md": false 00:10:07.415 }, 00:10:07.415 "memory_domains": [ 00:10:07.415 { 00:10:07.415 "dma_device_id": "system", 00:10:07.415 "dma_device_type": 1 00:10:07.415 }, 00:10:07.415 { 00:10:07.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.415 "dma_device_type": 2 00:10:07.415 } 00:10:07.415 ], 00:10:07.415 "driver_specific": {} 00:10:07.415 }' 00:10:07.415 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.675 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:07.935 06:05:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:07.935 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:07.935 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:07.935 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:07.935 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:07.935 "name": "BaseBdev3", 00:10:07.935 "aliases": [ 00:10:07.935 "5d7c8e8e-c898-48b1-90ed-d771b2ace1ac" 00:10:07.935 ], 00:10:07.935 "product_name": "Malloc disk", 00:10:07.935 "block_size": 512, 00:10:07.935 "num_blocks": 65536, 00:10:07.935 "uuid": "5d7c8e8e-c898-48b1-90ed-d771b2ace1ac", 00:10:07.935 "assigned_rate_limits": { 00:10:07.935 "rw_ios_per_sec": 0, 00:10:07.935 "rw_mbytes_per_sec": 0, 00:10:07.935 "r_mbytes_per_sec": 0, 00:10:07.935 "w_mbytes_per_sec": 0 00:10:07.935 }, 00:10:07.935 "claimed": true, 00:10:07.935 "claim_type": "exclusive_write", 00:10:07.935 "zoned": false, 00:10:07.935 "supported_io_types": { 00:10:07.935 "read": true, 00:10:07.935 "write": true, 00:10:07.935 "unmap": true, 00:10:07.935 "flush": true, 00:10:07.935 "reset": true, 00:10:07.935 "nvme_admin": false, 00:10:07.935 "nvme_io": false, 00:10:07.935 "nvme_io_md": false, 00:10:07.935 "write_zeroes": true, 00:10:07.935 "zcopy": true, 00:10:07.935 "get_zone_info": false, 00:10:07.935 "zone_management": false, 00:10:07.935 "zone_append": false, 00:10:07.935 "compare": false, 00:10:07.935 "compare_and_write": false, 00:10:07.935 "abort": true, 00:10:07.935 "seek_hole": false, 00:10:07.935 "seek_data": false, 00:10:07.935 "copy": true, 00:10:07.935 "nvme_iov_md": false 00:10:07.935 }, 00:10:07.935 "memory_domains": [ 00:10:07.935 { 00:10:07.935 "dma_device_id": "system", 00:10:07.935 "dma_device_type": 1 00:10:07.935 }, 00:10:07.935 { 00:10:07.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.935 "dma_device_type": 2 00:10:07.935 } 00:10:07.935 ], 00:10:07.935 "driver_specific": {} 00:10:07.935 }' 00:10:07.935 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:08.195 06:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:08.454 [2024-08-13 06:05:10.209169] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.454 [2024-08-13 06:05:10.209211] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.454 [2024-08-13 06:05:10.209270] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.454 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.713 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.713 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.713 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.713 "name": "Existed_Raid", 00:10:08.713 "uuid": "798acb1b-8ded-4b75-ab68-eccdcd065ecd", 00:10:08.713 "strip_size_kb": 64, 00:10:08.713 "state": "offline", 00:10:08.713 "raid_level": "concat", 00:10:08.713 "superblock": false, 00:10:08.713 "num_base_bdevs": 3, 00:10:08.713 "num_base_bdevs_discovered": 2, 00:10:08.713 "num_base_bdevs_operational": 2, 00:10:08.713 "base_bdevs_list": [ 00:10:08.713 { 00:10:08.713 "name": null, 00:10:08.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.713 "is_configured": false, 00:10:08.713 "data_offset": 0, 00:10:08.713 "data_size": 65536 00:10:08.713 }, 00:10:08.713 { 00:10:08.713 "name": "BaseBdev2", 00:10:08.713 "uuid": "6d5c2bb4-c53e-428e-b113-4ffce52c4e84", 00:10:08.713 "is_configured": true, 00:10:08.713 "data_offset": 0, 00:10:08.713 "data_size": 65536 00:10:08.713 }, 00:10:08.713 { 00:10:08.713 "name": "BaseBdev3", 00:10:08.713 "uuid": "5d7c8e8e-c898-48b1-90ed-d771b2ace1ac", 00:10:08.713 "is_configured": true, 00:10:08.713 
"data_offset": 0, 00:10:08.713 "data_size": 65536 00:10:08.713 } 00:10:08.713 ] 00:10:08.713 }' 00:10:08.713 06:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.713 06:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.281 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:09.281 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:09.281 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.281 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:09.541 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:09.541 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.541 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:09.801 [2024-08-13 06:05:11.358754] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.801 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:09.801 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:09.801 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.801 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:09.801 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:09.801 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.801 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:10.061 [2024-08-13 06:05:11.749136] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.061 [2024-08-13 06:05:11.749196] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:10.061 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:10.061 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:10.061 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.061 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:10.325 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:10.325 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:10.325 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:10.325 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:10.325 06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:10.325 
06:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.591 BaseBdev2 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:10.591 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.850 [ 00:10:10.850 { 00:10:10.850 "name": "BaseBdev2", 00:10:10.850 "aliases": [ 00:10:10.850 "038f6b35-893d-4ee2-9382-d8177ba8048a" 00:10:10.850 ], 00:10:10.850 "product_name": "Malloc disk", 00:10:10.850 "block_size": 512, 00:10:10.850 "num_blocks": 65536, 00:10:10.850 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:10.851 "assigned_rate_limits": { 00:10:10.851 "rw_ios_per_sec": 0, 00:10:10.851 "rw_mbytes_per_sec": 0, 00:10:10.851 "r_mbytes_per_sec": 0, 00:10:10.851 "w_mbytes_per_sec": 0 00:10:10.851 }, 00:10:10.851 "claimed": false, 00:10:10.851 "zoned": false, 00:10:10.851 "supported_io_types": { 00:10:10.851 "read": true, 00:10:10.851 "write": true, 00:10:10.851 "unmap": true, 00:10:10.851 "flush": true, 00:10:10.851 "reset": true, 00:10:10.851 "nvme_admin": false, 00:10:10.851 "nvme_io": false, 00:10:10.851 "nvme_io_md": false, 00:10:10.851 "write_zeroes": true, 00:10:10.851 "zcopy": true, 00:10:10.851 "get_zone_info": false, 00:10:10.851 "zone_management": false, 00:10:10.851 "zone_append": false, 00:10:10.851 "compare": false, 00:10:10.851 "compare_and_write": false, 00:10:10.851 "abort": true, 00:10:10.851 "seek_hole": false, 00:10:10.851 "seek_data": false, 00:10:10.851 "copy": true, 00:10:10.851 "nvme_iov_md": false 00:10:10.851 }, 00:10:10.851 "memory_domains": [ 00:10:10.851 { 00:10:10.851 "dma_device_id": "system", 00:10:10.851 "dma_device_type": 1 00:10:10.851 }, 00:10:10.851 { 00:10:10.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.851 "dma_device_type": 2 00:10:10.851 } 00:10:10.851 ], 00:10:10.851 "driver_specific": {} 00:10:10.851 } 00:10:10.851 ] 00:10:10.851 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:10.851 06:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:10.851 06:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:10.851 06:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.110 BaseBdev3 00:10:11.110 06:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:11.110 06:05:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:11.110 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:11.110 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:11.110 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:11.110 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:11.110 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:11.111 06:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.370 [ 00:10:11.370 { 00:10:11.370 "name": "BaseBdev3", 00:10:11.370 "aliases": [ 00:10:11.370 "2e8e7298-1fcc-46ac-a456-576b6ed24ec1" 00:10:11.370 ], 00:10:11.370 "product_name": "Malloc disk", 00:10:11.370 "block_size": 512, 00:10:11.370 "num_blocks": 65536, 00:10:11.370 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:11.370 "assigned_rate_limits": { 00:10:11.370 "rw_ios_per_sec": 0, 00:10:11.370 "rw_mbytes_per_sec": 0, 00:10:11.370 "r_mbytes_per_sec": 0, 00:10:11.370 "w_mbytes_per_sec": 0 00:10:11.370 }, 00:10:11.370 "claimed": false, 00:10:11.370 "zoned": false, 00:10:11.370 "supported_io_types": { 00:10:11.370 "read": true, 00:10:11.370 "write": true, 00:10:11.370 "unmap": true, 00:10:11.370 "flush": true, 00:10:11.370 "reset": true, 00:10:11.370 "nvme_admin": false, 00:10:11.370 "nvme_io": false, 00:10:11.370 "nvme_io_md": false, 00:10:11.370 "write_zeroes": true, 00:10:11.370 "zcopy": true, 00:10:11.370 "get_zone_info": false, 00:10:11.370 "zone_management": false, 00:10:11.370 "zone_append": false, 00:10:11.370 "compare": false, 00:10:11.370 "compare_and_write": false, 00:10:11.370 "abort": true, 00:10:11.370 "seek_hole": false, 00:10:11.370 "seek_data": false, 00:10:11.370 "copy": true, 00:10:11.370 "nvme_iov_md": false 00:10:11.370 }, 00:10:11.370 "memory_domains": [ 00:10:11.370 { 00:10:11.370 "dma_device_id": "system", 00:10:11.370 "dma_device_type": 1 00:10:11.370 }, 00:10:11.370 { 00:10:11.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.370 "dma_device_type": 2 00:10:11.370 } 00:10:11.370 ], 00:10:11.370 "driver_specific": {} 00:10:11.370 } 00:10:11.370 ] 00:10:11.370 06:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:11.370 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:11.370 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:11.370 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:11.629 [2024-08-13 06:05:13.283084] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.629 [2024-08-13 06:05:13.283143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.629 [2024-08-13 06:05:13.283168] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.629 [2024-08-13 06:05:13.285000] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.629 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.889 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:11.889 "name": "Existed_Raid", 00:10:11.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.889 "strip_size_kb": 64, 00:10:11.889 "state": "configuring", 00:10:11.889 "raid_level": "concat", 00:10:11.889 "superblock": false, 00:10:11.889 "num_base_bdevs": 3, 00:10:11.889 "num_base_bdevs_discovered": 2, 00:10:11.889 "num_base_bdevs_operational": 3, 00:10:11.889 "base_bdevs_list": [ 00:10:11.889 { 00:10:11.889 "name": "BaseBdev1", 00:10:11.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.889 "is_configured": false, 00:10:11.889 "data_offset": 0, 00:10:11.889 "data_size": 0 00:10:11.889 }, 00:10:11.889 { 00:10:11.889 "name": "BaseBdev2", 00:10:11.889 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:11.889 "is_configured": true, 00:10:11.889 "data_offset": 0, 00:10:11.889 "data_size": 65536 00:10:11.889 }, 00:10:11.889 { 00:10:11.889 "name": "BaseBdev3", 00:10:11.889 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:11.889 "is_configured": true, 00:10:11.889 "data_offset": 0, 00:10:11.889 "data_size": 65536 00:10:11.889 } 00:10:11.889 ] 00:10:11.889 }' 00:10:11.889 06:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:11.889 06:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.456 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:12.456 [2024-08-13 06:05:14.241390] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.715 
06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.715 "name": "Existed_Raid", 00:10:12.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.715 "strip_size_kb": 64, 00:10:12.715 "state": "configuring", 00:10:12.715 "raid_level": "concat", 00:10:12.715 "superblock": false, 00:10:12.715 "num_base_bdevs": 3, 00:10:12.715 "num_base_bdevs_discovered": 1, 00:10:12.715 "num_base_bdevs_operational": 3, 00:10:12.715 "base_bdevs_list": [ 00:10:12.715 { 00:10:12.715 "name": "BaseBdev1", 00:10:12.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.715 "is_configured": false, 00:10:12.715 "data_offset": 0, 00:10:12.715 "data_size": 0 00:10:12.715 }, 00:10:12.715 { 00:10:12.715 "name": null, 00:10:12.715 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:12.715 "is_configured": false, 00:10:12.715 "data_offset": 0, 00:10:12.715 "data_size": 65536 00:10:12.715 }, 00:10:12.715 { 00:10:12.715 "name": "BaseBdev3", 00:10:12.715 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:12.715 "is_configured": true, 00:10:12.715 "data_offset": 0, 00:10:12.715 "data_size": 65536 00:10:12.715 } 00:10:12.715 ] 00:10:12.715 }' 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.715 06:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.284 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.284 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.543 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:13.543 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.802 BaseBdev1 00:10:13.802 [2024-08-13 06:05:15.410379] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.802 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:13.802 06:05:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:13.802 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:13.802 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:13.802 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:13.802 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:13.802 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.061 [ 00:10:14.061 { 00:10:14.061 "name": "BaseBdev1", 00:10:14.061 "aliases": [ 00:10:14.061 "634c722a-4a10-46ec-83ba-a7f6d9e564ae" 00:10:14.061 ], 00:10:14.061 "product_name": "Malloc disk", 00:10:14.061 "block_size": 512, 00:10:14.061 "num_blocks": 65536, 00:10:14.061 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:14.061 "assigned_rate_limits": { 00:10:14.061 "rw_ios_per_sec": 0, 00:10:14.061 "rw_mbytes_per_sec": 0, 00:10:14.061 "r_mbytes_per_sec": 0, 00:10:14.061 "w_mbytes_per_sec": 0 00:10:14.061 }, 00:10:14.061 "claimed": true, 00:10:14.061 "claim_type": "exclusive_write", 00:10:14.061 "zoned": false, 00:10:14.061 "supported_io_types": { 00:10:14.061 "read": true, 00:10:14.061 "write": true, 00:10:14.061 "unmap": true, 00:10:14.061 "flush": true, 00:10:14.061 "reset": true, 00:10:14.061 "nvme_admin": false, 00:10:14.061 "nvme_io": false, 00:10:14.061 "nvme_io_md": false, 00:10:14.061 "write_zeroes": true, 00:10:14.061 "zcopy": true, 00:10:14.061 "get_zone_info": false, 00:10:14.061 "zone_management": false, 00:10:14.061 "zone_append": false, 00:10:14.061 "compare": false, 00:10:14.061 "compare_and_write": false, 00:10:14.061 "abort": true, 00:10:14.061 "seek_hole": false, 00:10:14.061 "seek_data": false, 00:10:14.061 "copy": true, 00:10:14.061 "nvme_iov_md": false 00:10:14.061 }, 00:10:14.061 "memory_domains": [ 00:10:14.061 { 00:10:14.061 "dma_device_id": "system", 00:10:14.061 "dma_device_type": 1 00:10:14.061 }, 00:10:14.061 { 00:10:14.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.061 "dma_device_type": 2 00:10:14.061 } 00:10:14.061 ], 00:10:14.061 "driver_specific": {} 00:10:14.061 } 00:10:14.061 ] 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.061 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.320 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.320 "name": "Existed_Raid", 00:10:14.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.320 "strip_size_kb": 64, 00:10:14.320 "state": "configuring", 00:10:14.320 "raid_level": "concat", 00:10:14.320 "superblock": false, 00:10:14.320 "num_base_bdevs": 3, 00:10:14.320 "num_base_bdevs_discovered": 2, 00:10:14.320 "num_base_bdevs_operational": 3, 00:10:14.320 "base_bdevs_list": [ 00:10:14.320 { 00:10:14.320 "name": "BaseBdev1", 00:10:14.320 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:14.320 "is_configured": true, 00:10:14.320 "data_offset": 0, 00:10:14.320 "data_size": 65536 00:10:14.320 }, 00:10:14.320 { 00:10:14.320 "name": null, 00:10:14.321 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:14.321 "is_configured": false, 00:10:14.321 "data_offset": 0, 00:10:14.321 "data_size": 65536 00:10:14.321 }, 00:10:14.321 { 00:10:14.321 "name": "BaseBdev3", 00:10:14.321 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:14.321 "is_configured": true, 00:10:14.321 "data_offset": 0, 00:10:14.321 "data_size": 65536 00:10:14.321 } 00:10:14.321 ] 00:10:14.321 }' 00:10:14.321 06:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.321 06:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.888 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.888 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:15.147 [2024-08-13 06:05:16.899881] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.147 06:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.407 06:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.407 "name": "Existed_Raid", 00:10:15.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.407 "strip_size_kb": 64, 00:10:15.407 "state": "configuring", 00:10:15.407 "raid_level": "concat", 00:10:15.407 "superblock": false, 00:10:15.407 "num_base_bdevs": 3, 00:10:15.407 "num_base_bdevs_discovered": 1, 00:10:15.407 "num_base_bdevs_operational": 3, 00:10:15.407 "base_bdevs_list": [ 00:10:15.407 { 00:10:15.407 "name": "BaseBdev1", 00:10:15.407 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:15.407 "is_configured": true, 00:10:15.407 "data_offset": 0, 00:10:15.407 "data_size": 65536 00:10:15.407 }, 00:10:15.407 { 00:10:15.407 "name": null, 00:10:15.407 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:15.407 "is_configured": false, 00:10:15.407 "data_offset": 0, 00:10:15.407 "data_size": 65536 00:10:15.407 }, 00:10:15.407 { 00:10:15.407 "name": null, 00:10:15.407 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:15.407 "is_configured": false, 00:10:15.407 "data_offset": 0, 00:10:15.407 "data_size": 65536 00:10:15.407 } 00:10:15.407 ] 00:10:15.407 }' 00:10:15.407 06:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.407 06:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.975 06:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.975 06:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:16.235 06:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:16.235 06:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:16.494 [2024-08-13 06:05:18.041935] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:16.494 06:05:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.494 "name": "Existed_Raid", 00:10:16.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.494 "strip_size_kb": 64, 00:10:16.494 "state": "configuring", 00:10:16.494 "raid_level": "concat", 00:10:16.494 "superblock": false, 00:10:16.494 "num_base_bdevs": 3, 00:10:16.494 "num_base_bdevs_discovered": 2, 00:10:16.494 "num_base_bdevs_operational": 3, 00:10:16.494 "base_bdevs_list": [ 00:10:16.494 { 00:10:16.494 "name": "BaseBdev1", 00:10:16.494 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:16.494 "is_configured": true, 00:10:16.494 "data_offset": 0, 00:10:16.494 "data_size": 65536 00:10:16.494 }, 00:10:16.494 { 00:10:16.494 "name": null, 00:10:16.494 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:16.494 "is_configured": false, 00:10:16.494 "data_offset": 0, 00:10:16.494 "data_size": 65536 00:10:16.494 }, 00:10:16.494 { 00:10:16.494 "name": "BaseBdev3", 00:10:16.494 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:16.494 "is_configured": true, 00:10:16.494 "data_offset": 0, 00:10:16.494 "data_size": 65536 00:10:16.494 } 00:10:16.494 ] 00:10:16.494 }' 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.494 06:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.063 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.063 06:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.322 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:17.322 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:17.580 [2024-08-13 06:05:19.208008] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.581 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.840 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.840 "name": "Existed_Raid", 00:10:17.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.840 "strip_size_kb": 64, 00:10:17.840 "state": "configuring", 00:10:17.840 "raid_level": "concat", 00:10:17.840 "superblock": false, 00:10:17.840 "num_base_bdevs": 3, 00:10:17.840 "num_base_bdevs_discovered": 1, 00:10:17.840 "num_base_bdevs_operational": 3, 00:10:17.840 "base_bdevs_list": [ 00:10:17.840 { 00:10:17.840 "name": null, 00:10:17.840 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:17.840 "is_configured": false, 00:10:17.840 "data_offset": 0, 00:10:17.840 "data_size": 65536 00:10:17.840 }, 00:10:17.840 { 00:10:17.840 "name": null, 00:10:17.840 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:17.840 "is_configured": false, 00:10:17.840 "data_offset": 0, 00:10:17.840 "data_size": 65536 00:10:17.840 }, 00:10:17.840 { 00:10:17.840 "name": "BaseBdev3", 00:10:17.840 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:17.840 "is_configured": true, 00:10:17.840 "data_offset": 0, 00:10:17.840 "data_size": 65536 00:10:17.840 } 00:10:17.840 ] 00:10:17.840 }' 00:10:17.840 06:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.840 06:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.408 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.408 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:18.408 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.668 [2024-08-13 06:05:20.364696] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.668 
06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.668 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.927 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.927 "name": "Existed_Raid", 00:10:18.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.927 "strip_size_kb": 64, 00:10:18.927 "state": "configuring", 00:10:18.927 "raid_level": "concat", 00:10:18.927 "superblock": false, 00:10:18.927 "num_base_bdevs": 3, 00:10:18.927 "num_base_bdevs_discovered": 2, 00:10:18.927 "num_base_bdevs_operational": 3, 00:10:18.927 "base_bdevs_list": [ 00:10:18.927 { 00:10:18.927 "name": null, 00:10:18.927 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:18.927 "is_configured": false, 00:10:18.927 "data_offset": 0, 00:10:18.927 "data_size": 65536 00:10:18.927 }, 00:10:18.927 { 00:10:18.927 "name": "BaseBdev2", 00:10:18.927 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:18.927 "is_configured": true, 00:10:18.927 "data_offset": 0, 00:10:18.927 "data_size": 65536 00:10:18.927 }, 00:10:18.927 { 00:10:18.927 "name": "BaseBdev3", 00:10:18.928 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:18.928 "is_configured": true, 00:10:18.928 "data_offset": 0, 00:10:18.928 "data_size": 65536 00:10:18.928 } 00:10:18.928 ] 00:10:18.928 }' 00:10:18.928 06:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.928 06:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.494 06:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.494 06:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.753 06:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:19.753 06:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.753 06:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:19.753 06:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 634c722a-4a10-46ec-83ba-a7f6d9e564ae 00:10:20.013 [2024-08-13 06:05:21.701387] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.013 [2024-08-13 06:05:21.701545] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:20.013 [2024-08-13 06:05:21.701571] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:20.013 [2024-08-13 06:05:21.701828] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:10:20.013 [2024-08-13 06:05:21.701976] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:20.013 [2024-08-13 06:05:21.702018] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:20.013 [2024-08-13 06:05:21.702237] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.013 NewBaseBdev 00:10:20.013 06:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:20.013 06:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:10:20.013 06:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:20.013 06:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:20.013 06:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:20.013 06:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:20.013 06:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.272 06:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.532 [ 00:10:20.532 { 00:10:20.532 "name": "NewBaseBdev", 00:10:20.532 "aliases": [ 00:10:20.532 "634c722a-4a10-46ec-83ba-a7f6d9e564ae" 00:10:20.532 ], 00:10:20.532 "product_name": "Malloc disk", 00:10:20.532 "block_size": 512, 00:10:20.532 "num_blocks": 65536, 00:10:20.532 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:20.532 "assigned_rate_limits": { 00:10:20.532 "rw_ios_per_sec": 0, 00:10:20.532 "rw_mbytes_per_sec": 0, 00:10:20.532 "r_mbytes_per_sec": 0, 00:10:20.532 "w_mbytes_per_sec": 0 00:10:20.532 }, 00:10:20.532 "claimed": true, 00:10:20.532 "claim_type": "exclusive_write", 00:10:20.532 "zoned": false, 00:10:20.532 "supported_io_types": { 00:10:20.532 "read": true, 00:10:20.532 "write": true, 00:10:20.532 "unmap": true, 00:10:20.532 "flush": true, 00:10:20.532 "reset": true, 00:10:20.532 "nvme_admin": false, 00:10:20.532 "nvme_io": false, 00:10:20.532 "nvme_io_md": false, 00:10:20.532 "write_zeroes": true, 00:10:20.532 "zcopy": true, 00:10:20.532 "get_zone_info": false, 00:10:20.532 "zone_management": false, 00:10:20.532 "zone_append": false, 00:10:20.532 "compare": false, 00:10:20.532 "compare_and_write": false, 00:10:20.532 "abort": true, 00:10:20.532 "seek_hole": false, 00:10:20.532 "seek_data": false, 00:10:20.532 "copy": true, 00:10:20.532 "nvme_iov_md": false 00:10:20.532 }, 00:10:20.532 "memory_domains": [ 00:10:20.532 { 00:10:20.532 "dma_device_id": "system", 00:10:20.532 "dma_device_type": 1 00:10:20.532 }, 00:10:20.532 { 00:10:20.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.532 "dma_device_type": 2 00:10:20.532 } 00:10:20.532 ], 00:10:20.532 "driver_specific": {} 00:10:20.532 } 00:10:20.532 ] 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # 
verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:20.532 "name": "Existed_Raid", 00:10:20.532 "uuid": "3d9a277e-e1b7-4f50-b8cd-9ab90e35b556", 00:10:20.532 "strip_size_kb": 64, 00:10:20.532 "state": "online", 00:10:20.532 "raid_level": "concat", 00:10:20.532 "superblock": false, 00:10:20.532 "num_base_bdevs": 3, 00:10:20.532 "num_base_bdevs_discovered": 3, 00:10:20.532 "num_base_bdevs_operational": 3, 00:10:20.532 "base_bdevs_list": [ 00:10:20.532 { 00:10:20.532 "name": "NewBaseBdev", 00:10:20.532 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:20.532 "is_configured": true, 00:10:20.532 "data_offset": 0, 00:10:20.532 "data_size": 65536 00:10:20.532 }, 00:10:20.532 { 00:10:20.532 "name": "BaseBdev2", 00:10:20.532 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:20.532 "is_configured": true, 00:10:20.532 "data_offset": 0, 00:10:20.532 "data_size": 65536 00:10:20.532 }, 00:10:20.532 { 00:10:20.532 "name": "BaseBdev3", 00:10:20.532 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:20.532 "is_configured": true, 00:10:20.532 "data_offset": 0, 00:10:20.532 "data_size": 65536 00:10:20.532 } 00:10:20.532 ] 00:10:20.532 }' 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.532 06:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:21.100 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:21.359 [2024-08-13 06:05:22.951583] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.359 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:21.359 "name": "Existed_Raid", 00:10:21.359 "aliases": [ 00:10:21.359 "3d9a277e-e1b7-4f50-b8cd-9ab90e35b556" 00:10:21.359 ], 00:10:21.359 "product_name": "Raid Volume", 00:10:21.359 "block_size": 512, 00:10:21.359 "num_blocks": 196608, 00:10:21.359 "uuid": "3d9a277e-e1b7-4f50-b8cd-9ab90e35b556", 00:10:21.359 "assigned_rate_limits": { 00:10:21.359 "rw_ios_per_sec": 0, 00:10:21.359 "rw_mbytes_per_sec": 0, 00:10:21.359 "r_mbytes_per_sec": 0, 00:10:21.359 "w_mbytes_per_sec": 0 00:10:21.359 }, 00:10:21.359 "claimed": false, 00:10:21.359 "zoned": false, 00:10:21.359 "supported_io_types": { 00:10:21.359 "read": true, 00:10:21.359 "write": true, 00:10:21.359 "unmap": true, 00:10:21.359 "flush": true, 00:10:21.359 "reset": true, 00:10:21.359 "nvme_admin": false, 00:10:21.359 "nvme_io": false, 00:10:21.359 "nvme_io_md": false, 00:10:21.359 "write_zeroes": true, 00:10:21.359 "zcopy": false, 00:10:21.359 "get_zone_info": false, 00:10:21.359 "zone_management": false, 00:10:21.359 "zone_append": false, 00:10:21.359 "compare": false, 00:10:21.359 "compare_and_write": false, 00:10:21.359 "abort": false, 00:10:21.359 "seek_hole": false, 00:10:21.359 "seek_data": false, 00:10:21.359 "copy": false, 00:10:21.359 "nvme_iov_md": false 00:10:21.359 }, 00:10:21.359 "memory_domains": [ 00:10:21.359 { 00:10:21.359 "dma_device_id": "system", 00:10:21.359 "dma_device_type": 1 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.359 "dma_device_type": 2 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "system", 00:10:21.359 "dma_device_type": 1 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.359 "dma_device_type": 2 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "system", 00:10:21.359 "dma_device_type": 1 00:10:21.359 }, 00:10:21.359 { 00:10:21.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.359 "dma_device_type": 2 00:10:21.359 } 00:10:21.359 ], 00:10:21.359 "driver_specific": { 00:10:21.359 "raid": { 00:10:21.359 "uuid": "3d9a277e-e1b7-4f50-b8cd-9ab90e35b556", 00:10:21.359 "strip_size_kb": 64, 00:10:21.359 "state": "online", 00:10:21.359 "raid_level": "concat", 00:10:21.359 "superblock": false, 00:10:21.359 "num_base_bdevs": 3, 00:10:21.359 "num_base_bdevs_discovered": 3, 00:10:21.359 "num_base_bdevs_operational": 3, 00:10:21.359 "base_bdevs_list": [ 00:10:21.359 { 00:10:21.359 "name": "NewBaseBdev", 00:10:21.359 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:21.359 "is_configured": true, 00:10:21.359 "data_offset": 0, 00:10:21.359 "data_size": 65536 00:10:21.360 }, 00:10:21.360 { 00:10:21.360 "name": "BaseBdev2", 00:10:21.360 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:21.360 "is_configured": true, 00:10:21.360 "data_offset": 0, 00:10:21.360 "data_size": 65536 00:10:21.360 }, 00:10:21.360 { 00:10:21.360 "name": "BaseBdev3", 00:10:21.360 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:21.360 "is_configured": true, 00:10:21.360 "data_offset": 0, 00:10:21.360 "data_size": 65536 00:10:21.360 } 00:10:21.360 ] 00:10:21.360 } 00:10:21.360 } 00:10:21.360 }' 
00:10:21.360 06:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.360 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:21.360 BaseBdev2 00:10:21.360 BaseBdev3' 00:10:21.360 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:21.360 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:21.360 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:21.618 "name": "NewBaseBdev", 00:10:21.618 "aliases": [ 00:10:21.618 "634c722a-4a10-46ec-83ba-a7f6d9e564ae" 00:10:21.618 ], 00:10:21.618 "product_name": "Malloc disk", 00:10:21.618 "block_size": 512, 00:10:21.618 "num_blocks": 65536, 00:10:21.618 "uuid": "634c722a-4a10-46ec-83ba-a7f6d9e564ae", 00:10:21.618 "assigned_rate_limits": { 00:10:21.618 "rw_ios_per_sec": 0, 00:10:21.618 "rw_mbytes_per_sec": 0, 00:10:21.618 "r_mbytes_per_sec": 0, 00:10:21.618 "w_mbytes_per_sec": 0 00:10:21.618 }, 00:10:21.618 "claimed": true, 00:10:21.618 "claim_type": "exclusive_write", 00:10:21.618 "zoned": false, 00:10:21.618 "supported_io_types": { 00:10:21.618 "read": true, 00:10:21.618 "write": true, 00:10:21.618 "unmap": true, 00:10:21.618 "flush": true, 00:10:21.618 "reset": true, 00:10:21.618 "nvme_admin": false, 00:10:21.618 "nvme_io": false, 00:10:21.618 "nvme_io_md": false, 00:10:21.618 "write_zeroes": true, 00:10:21.618 "zcopy": true, 00:10:21.618 "get_zone_info": false, 00:10:21.618 "zone_management": false, 00:10:21.618 "zone_append": false, 00:10:21.618 "compare": false, 00:10:21.618 "compare_and_write": false, 00:10:21.618 "abort": true, 00:10:21.618 "seek_hole": false, 00:10:21.618 "seek_data": false, 00:10:21.618 "copy": true, 00:10:21.618 "nvme_iov_md": false 00:10:21.618 }, 00:10:21.618 "memory_domains": [ 00:10:21.618 { 00:10:21.618 "dma_device_id": "system", 00:10:21.618 "dma_device_type": 1 00:10:21.618 }, 00:10:21.618 { 00:10:21.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.618 "dma_device_type": 2 00:10:21.618 } 00:10:21.618 ], 00:10:21.618 "driver_specific": {} 00:10:21.618 }' 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:21.618 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:21.877 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.134 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.134 "name": "BaseBdev2", 00:10:22.134 "aliases": [ 00:10:22.134 "038f6b35-893d-4ee2-9382-d8177ba8048a" 00:10:22.134 ], 00:10:22.134 "product_name": "Malloc disk", 00:10:22.134 "block_size": 512, 00:10:22.134 "num_blocks": 65536, 00:10:22.134 "uuid": "038f6b35-893d-4ee2-9382-d8177ba8048a", 00:10:22.134 "assigned_rate_limits": { 00:10:22.134 "rw_ios_per_sec": 0, 00:10:22.134 "rw_mbytes_per_sec": 0, 00:10:22.134 "r_mbytes_per_sec": 0, 00:10:22.134 "w_mbytes_per_sec": 0 00:10:22.134 }, 00:10:22.134 "claimed": true, 00:10:22.134 "claim_type": "exclusive_write", 00:10:22.134 "zoned": false, 00:10:22.134 "supported_io_types": { 00:10:22.134 "read": true, 00:10:22.134 "write": true, 00:10:22.134 "unmap": true, 00:10:22.134 "flush": true, 00:10:22.134 "reset": true, 00:10:22.134 "nvme_admin": false, 00:10:22.134 "nvme_io": false, 00:10:22.134 "nvme_io_md": false, 00:10:22.134 "write_zeroes": true, 00:10:22.134 "zcopy": true, 00:10:22.134 "get_zone_info": false, 00:10:22.134 "zone_management": false, 00:10:22.134 "zone_append": false, 00:10:22.134 "compare": false, 00:10:22.134 "compare_and_write": false, 00:10:22.134 "abort": true, 00:10:22.134 "seek_hole": false, 00:10:22.134 "seek_data": false, 00:10:22.134 "copy": true, 00:10:22.134 "nvme_iov_md": false 00:10:22.134 }, 00:10:22.134 "memory_domains": [ 00:10:22.134 { 00:10:22.134 "dma_device_id": "system", 00:10:22.134 "dma_device_type": 1 00:10:22.134 }, 00:10:22.134 { 00:10:22.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.134 "dma_device_type": 2 00:10:22.134 } 00:10:22.134 ], 00:10:22.134 "driver_specific": {} 00:10:22.134 }' 00:10:22.134 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.134 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.134 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.134 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.134 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.393 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.393 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.393 06:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.393 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.393 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.393 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.393 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.393 06:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:22.393 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.393 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:22.652 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.652 "name": "BaseBdev3", 00:10:22.652 "aliases": [ 00:10:22.652 "2e8e7298-1fcc-46ac-a456-576b6ed24ec1" 00:10:22.652 ], 00:10:22.652 "product_name": "Malloc disk", 00:10:22.652 "block_size": 512, 00:10:22.652 "num_blocks": 65536, 00:10:22.652 "uuid": "2e8e7298-1fcc-46ac-a456-576b6ed24ec1", 00:10:22.652 "assigned_rate_limits": { 00:10:22.652 "rw_ios_per_sec": 0, 00:10:22.652 "rw_mbytes_per_sec": 0, 00:10:22.652 "r_mbytes_per_sec": 0, 00:10:22.652 "w_mbytes_per_sec": 0 00:10:22.652 }, 00:10:22.652 "claimed": true, 00:10:22.652 "claim_type": "exclusive_write", 00:10:22.652 "zoned": false, 00:10:22.652 "supported_io_types": { 00:10:22.652 "read": true, 00:10:22.652 "write": true, 00:10:22.652 "unmap": true, 00:10:22.652 "flush": true, 00:10:22.652 "reset": true, 00:10:22.652 "nvme_admin": false, 00:10:22.652 "nvme_io": false, 00:10:22.652 "nvme_io_md": false, 00:10:22.652 "write_zeroes": true, 00:10:22.652 "zcopy": true, 00:10:22.652 "get_zone_info": false, 00:10:22.652 "zone_management": false, 00:10:22.652 "zone_append": false, 00:10:22.652 "compare": false, 00:10:22.652 "compare_and_write": false, 00:10:22.652 "abort": true, 00:10:22.652 "seek_hole": false, 00:10:22.652 "seek_data": false, 00:10:22.652 "copy": true, 00:10:22.652 "nvme_iov_md": false 00:10:22.652 }, 00:10:22.652 "memory_domains": [ 00:10:22.652 { 00:10:22.652 "dma_device_id": "system", 00:10:22.652 "dma_device_type": 1 00:10:22.652 }, 00:10:22.652 { 00:10:22.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.652 "dma_device_type": 2 00:10:22.652 } 00:10:22.652 ], 00:10:22.652 "driver_specific": {} 00:10:22.652 }' 00:10:22.652 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.652 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.652 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.652 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.954 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:23.228 [2024-08-13 
06:05:24.864076] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.228 [2024-08-13 06:05:24.864179] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.228 [2024-08-13 06:05:24.864311] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.228 [2024-08-13 06:05:24.864385] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.228 [2024-08-13 06:05:24.864426] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 77756 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 77756 ']' 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 77756 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77756 00:10:23.228 killing process with pid 77756 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:23.228 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:23.229 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77756' 00:10:23.229 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 77756 00:10:23.229 [2024-08-13 06:05:24.922790] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.229 06:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 77756 00:10:23.229 [2024-08-13 06:05:24.952929] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:23.488 00:10:23.488 real 0m24.389s 00:10:23.488 user 0m45.376s 00:10:23.488 sys 0m3.621s 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.488 ************************************ 00:10:23.488 END TEST raid_state_function_test 00:10:23.488 ************************************ 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.488 06:05:25 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:23.488 06:05:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:23.488 06:05:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.488 06:05:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.488 ************************************ 00:10:23.488 START TEST raid_state_function_test_sb 00:10:23.488 ************************************ 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:23.488 06:05:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:23.488 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:23.489 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:23.749 Process raid pid: 78653 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=78653 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 78653' 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 78653 /var/tmp/spdk-raid.sock 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 78653 ']' 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:23.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:23.749 06:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.749 [2024-08-13 06:05:25.356637] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:10:23.749 [2024-08-13 06:05:25.356817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.749 [2024-08-13 06:05:25.504718] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.008 [2024-08-13 06:05:25.550533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.008 [2024-08-13 06:05:25.592661] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.008 [2024-08-13 06:05:25.592775] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:24.577 [2024-08-13 06:05:26.336372] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.577 [2024-08-13 06:05:26.336518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.577 [2024-08-13 06:05:26.336568] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.577 [2024-08-13 06:05:26.336592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.577 [2024-08-13 06:05:26.336615] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:24.577 [2024-08-13 06:05:26.336635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.577 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.837 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.837 "name": "Existed_Raid", 00:10:24.837 "uuid": "1dbe41e1-2005-4591-9e55-cbe42df7ad17", 00:10:24.837 "strip_size_kb": 64, 00:10:24.837 "state": "configuring", 00:10:24.837 "raid_level": "concat", 00:10:24.837 "superblock": true, 00:10:24.837 "num_base_bdevs": 3, 00:10:24.837 "num_base_bdevs_discovered": 0, 00:10:24.837 "num_base_bdevs_operational": 3, 00:10:24.837 "base_bdevs_list": [ 00:10:24.837 { 00:10:24.837 "name": "BaseBdev1", 00:10:24.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.837 "is_configured": false, 00:10:24.837 "data_offset": 0, 00:10:24.837 "data_size": 0 00:10:24.837 }, 00:10:24.837 { 00:10:24.837 "name": "BaseBdev2", 00:10:24.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.837 "is_configured": false, 00:10:24.837 "data_offset": 0, 00:10:24.837 "data_size": 0 00:10:24.837 }, 00:10:24.837 { 00:10:24.837 "name": "BaseBdev3", 00:10:24.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.837 "is_configured": false, 00:10:24.837 "data_offset": 0, 00:10:24.837 "data_size": 0 00:10:24.837 } 00:10:24.837 ] 00:10:24.837 }' 00:10:24.837 06:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:24.837 06:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.406 06:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:25.666 [2024-08-13 06:05:27.238659] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.666 [2024-08-13 06:05:27.238794] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:25.666 06:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:25.666 [2024-08-13 06:05:27.438345] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.666 [2024-08-13 06:05:27.438452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.666 [2024-08-13 06:05:27.438501] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.666 [2024-08-13 06:05:27.438526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.666 [2024-08-13 06:05:27.438559] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:25.666 [2024-08-13 06:05:27.438605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.926 [2024-08-13 06:05:27.626772] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.926 BaseBdev1 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:25.926 06:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:26.186 06:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.445 [ 00:10:26.445 { 00:10:26.445 "name": "BaseBdev1", 00:10:26.445 "aliases": [ 00:10:26.445 "a78eef95-d639-4108-990b-2b1b74c5747c" 00:10:26.445 ], 00:10:26.445 "product_name": "Malloc disk", 00:10:26.445 "block_size": 512, 00:10:26.445 "num_blocks": 65536, 00:10:26.445 "uuid": "a78eef95-d639-4108-990b-2b1b74c5747c", 00:10:26.445 "assigned_rate_limits": { 00:10:26.445 "rw_ios_per_sec": 0, 00:10:26.445 "rw_mbytes_per_sec": 0, 00:10:26.445 "r_mbytes_per_sec": 0, 00:10:26.445 "w_mbytes_per_sec": 0 00:10:26.445 }, 00:10:26.445 "claimed": true, 00:10:26.445 "claim_type": "exclusive_write", 00:10:26.445 "zoned": false, 00:10:26.445 "supported_io_types": { 00:10:26.445 "read": true, 00:10:26.445 "write": true, 00:10:26.445 "unmap": true, 00:10:26.445 "flush": true, 00:10:26.445 "reset": true, 00:10:26.445 "nvme_admin": false, 00:10:26.445 "nvme_io": false, 00:10:26.445 "nvme_io_md": false, 00:10:26.445 "write_zeroes": true, 00:10:26.445 "zcopy": true, 00:10:26.445 "get_zone_info": false, 00:10:26.445 "zone_management": false, 00:10:26.445 "zone_append": false, 00:10:26.445 "compare": false, 00:10:26.445 "compare_and_write": false, 00:10:26.445 "abort": true, 00:10:26.445 "seek_hole": false, 00:10:26.445 "seek_data": false, 00:10:26.445 "copy": true, 00:10:26.445 "nvme_iov_md": false 00:10:26.445 }, 00:10:26.445 "memory_domains": [ 00:10:26.445 { 00:10:26.445 "dma_device_id": "system", 00:10:26.445 "dma_device_type": 1 00:10:26.445 }, 00:10:26.445 { 00:10:26.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.445 "dma_device_type": 2 00:10:26.445 } 00:10:26.445 ], 00:10:26.445 "driver_specific": {} 00:10:26.445 } 00:10:26.445 ] 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:26.445 06:05:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.445 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.704 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:26.704 "name": "Existed_Raid", 00:10:26.704 "uuid": "b6173d8e-f0e4-4a5c-9339-17a8fe28ce0d", 00:10:26.704 "strip_size_kb": 64, 00:10:26.704 "state": "configuring", 00:10:26.704 "raid_level": "concat", 00:10:26.704 "superblock": true, 00:10:26.704 "num_base_bdevs": 3, 00:10:26.704 "num_base_bdevs_discovered": 1, 00:10:26.704 "num_base_bdevs_operational": 3, 00:10:26.704 "base_bdevs_list": [ 00:10:26.704 { 00:10:26.704 "name": "BaseBdev1", 00:10:26.704 "uuid": "a78eef95-d639-4108-990b-2b1b74c5747c", 00:10:26.704 "is_configured": true, 00:10:26.704 "data_offset": 2048, 00:10:26.704 "data_size": 63488 00:10:26.704 }, 00:10:26.704 { 00:10:26.704 "name": "BaseBdev2", 00:10:26.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.704 "is_configured": false, 00:10:26.704 "data_offset": 0, 00:10:26.704 "data_size": 0 00:10:26.704 }, 00:10:26.704 { 00:10:26.704 "name": "BaseBdev3", 00:10:26.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.704 "is_configured": false, 00:10:26.704 "data_offset": 0, 00:10:26.704 "data_size": 0 00:10:26.704 } 00:10:26.704 ] 00:10:26.704 }' 00:10:26.704 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:26.704 06:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.273 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:27.273 [2024-08-13 06:05:28.980485] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.273 [2024-08-13 06:05:28.980566] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:27.273 06:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:27.533 [2024-08-13 06:05:29.176220] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.533 [2024-08-13 06:05:29.178101] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.533 [2024-08-13 06:05:29.178143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.533 [2024-08-13 06:05:29.178165] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.533 [2024-08-13 06:05:29.178172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.533 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.792 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.792 "name": "Existed_Raid", 00:10:27.792 "uuid": "1e821d9d-7421-414f-9130-720d813cb52e", 00:10:27.792 "strip_size_kb": 64, 00:10:27.792 "state": "configuring", 00:10:27.792 "raid_level": "concat", 00:10:27.792 "superblock": true, 00:10:27.792 "num_base_bdevs": 3, 00:10:27.792 "num_base_bdevs_discovered": 1, 00:10:27.792 "num_base_bdevs_operational": 3, 00:10:27.792 "base_bdevs_list": [ 00:10:27.792 { 00:10:27.792 "name": "BaseBdev1", 00:10:27.792 "uuid": "a78eef95-d639-4108-990b-2b1b74c5747c", 00:10:27.792 "is_configured": true, 00:10:27.792 "data_offset": 2048, 00:10:27.792 "data_size": 63488 00:10:27.792 }, 00:10:27.792 { 00:10:27.792 "name": "BaseBdev2", 00:10:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.792 "is_configured": false, 00:10:27.792 "data_offset": 0, 00:10:27.792 "data_size": 0 00:10:27.792 }, 00:10:27.792 { 00:10:27.792 "name": "BaseBdev3", 00:10:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.792 "is_configured": false, 00:10:27.792 "data_offset": 0, 00:10:27.792 "data_size": 0 00:10:27.792 } 00:10:27.792 ] 00:10:27.792 }' 
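The exchange above shows that bdev_raid_create accepts base bdevs that do not exist yet: the RPC registers BaseBdev2 and BaseBdev3 as "doesn't exist now" placeholders, and the array sits in the "configuring" state until every member is discovered and claimed. A minimal sketch of that create-before-members flow, assuming an SPDK target is already serving RPCs on /var/tmp/spdk-raid.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Only BaseBdev1 exists yet; BaseBdev2 and BaseBdev3 stay unclaimed placeholders.
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
  $rpc -s $sock bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # The array reports "configuring" until all three members are claimed.
  $rpc -s $sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'

Each missing slot shows up in base_bdevs_list with an all-zero UUID and is_configured: false, exactly as in the JSON dump above.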
00:10:27.792 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.792 06:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.360 06:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.360 [2024-08-13 06:05:30.125787] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.360 BaseBdev2 00:10:28.360 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:28.360 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:28.360 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:28.360 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:28.360 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:28.360 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:28.360 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:28.620 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.881 [ 00:10:28.881 { 00:10:28.881 "name": "BaseBdev2", 00:10:28.881 "aliases": [ 00:10:28.881 "f95d3c77-55b1-4df1-8428-3d09390490db" 00:10:28.881 ], 00:10:28.881 "product_name": "Malloc disk", 00:10:28.881 "block_size": 512, 00:10:28.881 "num_blocks": 65536, 00:10:28.881 "uuid": "f95d3c77-55b1-4df1-8428-3d09390490db", 00:10:28.881 "assigned_rate_limits": { 00:10:28.881 "rw_ios_per_sec": 0, 00:10:28.881 "rw_mbytes_per_sec": 0, 00:10:28.881 "r_mbytes_per_sec": 0, 00:10:28.881 "w_mbytes_per_sec": 0 00:10:28.881 }, 00:10:28.881 "claimed": true, 00:10:28.881 "claim_type": "exclusive_write", 00:10:28.881 "zoned": false, 00:10:28.881 "supported_io_types": { 00:10:28.881 "read": true, 00:10:28.881 "write": true, 00:10:28.881 "unmap": true, 00:10:28.881 "flush": true, 00:10:28.881 "reset": true, 00:10:28.881 "nvme_admin": false, 00:10:28.881 "nvme_io": false, 00:10:28.881 "nvme_io_md": false, 00:10:28.881 "write_zeroes": true, 00:10:28.881 "zcopy": true, 00:10:28.881 "get_zone_info": false, 00:10:28.881 "zone_management": false, 00:10:28.881 "zone_append": false, 00:10:28.881 "compare": false, 00:10:28.881 "compare_and_write": false, 00:10:28.881 "abort": true, 00:10:28.881 "seek_hole": false, 00:10:28.881 "seek_data": false, 00:10:28.881 "copy": true, 00:10:28.881 "nvme_iov_md": false 00:10:28.881 }, 00:10:28.881 "memory_domains": [ 00:10:28.881 { 00:10:28.881 "dma_device_id": "system", 00:10:28.881 "dma_device_type": 1 00:10:28.881 }, 00:10:28.881 { 00:10:28.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.881 "dma_device_type": 2 00:10:28.881 } 00:10:28.881 ], 00:10:28.881 "driver_specific": {} 00:10:28.881 } 00:10:28.881 ] 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:28.881 06:05:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.881 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.140 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.140 "name": "Existed_Raid", 00:10:29.140 "uuid": "1e821d9d-7421-414f-9130-720d813cb52e", 00:10:29.140 "strip_size_kb": 64, 00:10:29.140 "state": "configuring", 00:10:29.140 "raid_level": "concat", 00:10:29.140 "superblock": true, 00:10:29.140 "num_base_bdevs": 3, 00:10:29.140 "num_base_bdevs_discovered": 2, 00:10:29.140 "num_base_bdevs_operational": 3, 00:10:29.140 "base_bdevs_list": [ 00:10:29.140 { 00:10:29.140 "name": "BaseBdev1", 00:10:29.140 "uuid": "a78eef95-d639-4108-990b-2b1b74c5747c", 00:10:29.140 "is_configured": true, 00:10:29.140 "data_offset": 2048, 00:10:29.140 "data_size": 63488 00:10:29.140 }, 00:10:29.140 { 00:10:29.140 "name": "BaseBdev2", 00:10:29.140 "uuid": "f95d3c77-55b1-4df1-8428-3d09390490db", 00:10:29.140 "is_configured": true, 00:10:29.140 "data_offset": 2048, 00:10:29.140 "data_size": 63488 00:10:29.140 }, 00:10:29.140 { 00:10:29.140 "name": "BaseBdev3", 00:10:29.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.140 "is_configured": false, 00:10:29.140 "data_offset": 0, 00:10:29.140 "data_size": 0 00:10:29.140 } 00:10:29.140 ] 00:10:29.140 }' 00:10:29.140 06:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.140 06:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.709 [2024-08-13 06:05:31.442482] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.709 [2024-08-13 06:05:31.442803] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:29.709 [2024-08-13 06:05:31.442842] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:29.709 [2024-08-13 06:05:31.443184] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:29.709 [2024-08-13 06:05:31.443354] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:29.709 [2024-08-13 06:05:31.443403] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:29.709 [2024-08-13 06:05:31.443562] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.709 BaseBdev3 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:29.709 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:29.968 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.228 [ 00:10:30.228 { 00:10:30.228 "name": "BaseBdev3", 00:10:30.228 "aliases": [ 00:10:30.228 "0c909913-e1e9-48bc-bf3a-1126f2a4503f" 00:10:30.228 ], 00:10:30.228 "product_name": "Malloc disk", 00:10:30.228 "block_size": 512, 00:10:30.228 "num_blocks": 65536, 00:10:30.228 "uuid": "0c909913-e1e9-48bc-bf3a-1126f2a4503f", 00:10:30.228 "assigned_rate_limits": { 00:10:30.228 "rw_ios_per_sec": 0, 00:10:30.228 "rw_mbytes_per_sec": 0, 00:10:30.228 "r_mbytes_per_sec": 0, 00:10:30.228 "w_mbytes_per_sec": 0 00:10:30.228 }, 00:10:30.228 "claimed": true, 00:10:30.228 "claim_type": "exclusive_write", 00:10:30.228 "zoned": false, 00:10:30.228 "supported_io_types": { 00:10:30.228 "read": true, 00:10:30.228 "write": true, 00:10:30.228 "unmap": true, 00:10:30.228 "flush": true, 00:10:30.228 "reset": true, 00:10:30.228 "nvme_admin": false, 00:10:30.228 "nvme_io": false, 00:10:30.228 "nvme_io_md": false, 00:10:30.228 "write_zeroes": true, 00:10:30.228 "zcopy": true, 00:10:30.228 "get_zone_info": false, 00:10:30.228 "zone_management": false, 00:10:30.228 "zone_append": false, 00:10:30.228 "compare": false, 00:10:30.228 "compare_and_write": false, 00:10:30.228 "abort": true, 00:10:30.228 "seek_hole": false, 00:10:30.228 "seek_data": false, 00:10:30.228 "copy": true, 00:10:30.228 "nvme_iov_md": false 00:10:30.228 }, 00:10:30.228 "memory_domains": [ 00:10:30.228 { 00:10:30.228 "dma_device_id": "system", 00:10:30.228 "dma_device_type": 1 00:10:30.228 }, 00:10:30.228 { 00:10:30.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.228 "dma_device_type": 2 00:10:30.228 } 00:10:30.228 ], 00:10:30.228 "driver_specific": {} 00:10:30.228 } 00:10:30.228 ] 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.228 06:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.487 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.487 "name": "Existed_Raid", 00:10:30.487 "uuid": "1e821d9d-7421-414f-9130-720d813cb52e", 00:10:30.487 "strip_size_kb": 64, 00:10:30.487 "state": "online", 00:10:30.487 "raid_level": "concat", 00:10:30.487 "superblock": true, 00:10:30.487 "num_base_bdevs": 3, 00:10:30.487 "num_base_bdevs_discovered": 3, 00:10:30.487 "num_base_bdevs_operational": 3, 00:10:30.487 "base_bdevs_list": [ 00:10:30.487 { 00:10:30.487 "name": "BaseBdev1", 00:10:30.487 "uuid": "a78eef95-d639-4108-990b-2b1b74c5747c", 00:10:30.487 "is_configured": true, 00:10:30.487 "data_offset": 2048, 00:10:30.487 "data_size": 63488 00:10:30.487 }, 00:10:30.487 { 00:10:30.487 "name": "BaseBdev2", 00:10:30.488 "uuid": "f95d3c77-55b1-4df1-8428-3d09390490db", 00:10:30.488 "is_configured": true, 00:10:30.488 "data_offset": 2048, 00:10:30.488 "data_size": 63488 00:10:30.488 }, 00:10:30.488 { 00:10:30.488 "name": "BaseBdev3", 00:10:30.488 "uuid": "0c909913-e1e9-48bc-bf3a-1126f2a4503f", 00:10:30.488 "is_configured": true, 00:10:30.488 "data_offset": 2048, 00:10:30.488 "data_size": 63488 00:10:30.488 } 00:10:30.488 ] 00:10:30.488 }' 00:10:30.488 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.488 06:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
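The blockcnt logged when the array came online above (190464 blocks, blocklen 512) follows directly from the superblock option: with -s, each 65536-block member reserves 2048 blocks at its head (data_offset: 2048), leaving 63488 data blocks, and a three-member concat volume therefore exposes 3 * 63488 = 190464 blocks. A quick sanity check of that arithmetic, assuming the same socket as above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # 65536 blocks per malloc member, minus 2048 superblock blocks, times 3 members.
  expected=$(( 3 * (65536 - 2048) ))    # = 190464
  actual=$($rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[0].num_blocks')
  [[ "$actual" -eq "$expected" ]] && echo "capacity OK: $actual blocks"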
00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:31.055 [2024-08-13 06:05:32.800543] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.055 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:31.055 "name": "Existed_Raid", 00:10:31.055 "aliases": [ 00:10:31.055 "1e821d9d-7421-414f-9130-720d813cb52e" 00:10:31.055 ], 00:10:31.055 "product_name": "Raid Volume", 00:10:31.055 "block_size": 512, 00:10:31.055 "num_blocks": 190464, 00:10:31.055 "uuid": "1e821d9d-7421-414f-9130-720d813cb52e", 00:10:31.055 "assigned_rate_limits": { 00:10:31.055 "rw_ios_per_sec": 0, 00:10:31.055 "rw_mbytes_per_sec": 0, 00:10:31.055 "r_mbytes_per_sec": 0, 00:10:31.055 "w_mbytes_per_sec": 0 00:10:31.055 }, 00:10:31.055 "claimed": false, 00:10:31.055 "zoned": false, 00:10:31.055 "supported_io_types": { 00:10:31.055 "read": true, 00:10:31.055 "write": true, 00:10:31.055 "unmap": true, 00:10:31.055 "flush": true, 00:10:31.055 "reset": true, 00:10:31.055 "nvme_admin": false, 00:10:31.055 "nvme_io": false, 00:10:31.056 "nvme_io_md": false, 00:10:31.056 "write_zeroes": true, 00:10:31.056 "zcopy": false, 00:10:31.056 "get_zone_info": false, 00:10:31.056 "zone_management": false, 00:10:31.056 "zone_append": false, 00:10:31.056 "compare": false, 00:10:31.056 "compare_and_write": false, 00:10:31.056 "abort": false, 00:10:31.056 "seek_hole": false, 00:10:31.056 "seek_data": false, 00:10:31.056 "copy": false, 00:10:31.056 "nvme_iov_md": false 00:10:31.056 }, 00:10:31.056 "memory_domains": [ 00:10:31.056 { 00:10:31.056 "dma_device_id": "system", 00:10:31.056 "dma_device_type": 1 00:10:31.056 }, 00:10:31.056 { 00:10:31.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.056 "dma_device_type": 2 00:10:31.056 }, 00:10:31.056 { 00:10:31.056 "dma_device_id": "system", 00:10:31.056 "dma_device_type": 1 00:10:31.056 }, 00:10:31.056 { 00:10:31.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.056 "dma_device_type": 2 00:10:31.056 }, 00:10:31.056 { 00:10:31.056 "dma_device_id": "system", 00:10:31.056 "dma_device_type": 1 00:10:31.056 }, 00:10:31.056 { 00:10:31.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.056 "dma_device_type": 2 00:10:31.056 } 00:10:31.056 ], 00:10:31.056 "driver_specific": { 00:10:31.056 "raid": { 00:10:31.056 "uuid": "1e821d9d-7421-414f-9130-720d813cb52e", 00:10:31.056 "strip_size_kb": 64, 00:10:31.056 "state": "online", 00:10:31.056 "raid_level": "concat", 00:10:31.056 "superblock": true, 00:10:31.056 "num_base_bdevs": 3, 00:10:31.056 "num_base_bdevs_discovered": 3, 00:10:31.056 "num_base_bdevs_operational": 3, 00:10:31.056 "base_bdevs_list": [ 00:10:31.056 { 00:10:31.056 "name": "BaseBdev1", 00:10:31.056 "uuid": "a78eef95-d639-4108-990b-2b1b74c5747c", 00:10:31.056 "is_configured": true, 00:10:31.056 "data_offset": 2048, 00:10:31.056 "data_size": 63488 00:10:31.056 }, 00:10:31.056 { 00:10:31.056 "name": "BaseBdev2", 00:10:31.056 "uuid": "f95d3c77-55b1-4df1-8428-3d09390490db", 00:10:31.056 "is_configured": true, 00:10:31.056 "data_offset": 2048, 00:10:31.056 
"data_size": 63488 00:10:31.056 }, 00:10:31.056 { 00:10:31.056 "name": "BaseBdev3", 00:10:31.056 "uuid": "0c909913-e1e9-48bc-bf3a-1126f2a4503f", 00:10:31.056 "is_configured": true, 00:10:31.056 "data_offset": 2048, 00:10:31.056 "data_size": 63488 00:10:31.056 } 00:10:31.056 ] 00:10:31.056 } 00:10:31.056 } 00:10:31.056 }' 00:10:31.056 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.315 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:31.315 BaseBdev2 00:10:31.315 BaseBdev3' 00:10:31.315 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.315 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.315 06:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:31.315 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.315 "name": "BaseBdev1", 00:10:31.315 "aliases": [ 00:10:31.315 "a78eef95-d639-4108-990b-2b1b74c5747c" 00:10:31.315 ], 00:10:31.315 "product_name": "Malloc disk", 00:10:31.315 "block_size": 512, 00:10:31.315 "num_blocks": 65536, 00:10:31.315 "uuid": "a78eef95-d639-4108-990b-2b1b74c5747c", 00:10:31.315 "assigned_rate_limits": { 00:10:31.315 "rw_ios_per_sec": 0, 00:10:31.315 "rw_mbytes_per_sec": 0, 00:10:31.315 "r_mbytes_per_sec": 0, 00:10:31.315 "w_mbytes_per_sec": 0 00:10:31.315 }, 00:10:31.315 "claimed": true, 00:10:31.315 "claim_type": "exclusive_write", 00:10:31.315 "zoned": false, 00:10:31.315 "supported_io_types": { 00:10:31.315 "read": true, 00:10:31.315 "write": true, 00:10:31.315 "unmap": true, 00:10:31.315 "flush": true, 00:10:31.315 "reset": true, 00:10:31.315 "nvme_admin": false, 00:10:31.315 "nvme_io": false, 00:10:31.315 "nvme_io_md": false, 00:10:31.315 "write_zeroes": true, 00:10:31.315 "zcopy": true, 00:10:31.315 "get_zone_info": false, 00:10:31.315 "zone_management": false, 00:10:31.315 "zone_append": false, 00:10:31.315 "compare": false, 00:10:31.315 "compare_and_write": false, 00:10:31.315 "abort": true, 00:10:31.315 "seek_hole": false, 00:10:31.315 "seek_data": false, 00:10:31.315 "copy": true, 00:10:31.315 "nvme_iov_md": false 00:10:31.315 }, 00:10:31.315 "memory_domains": [ 00:10:31.315 { 00:10:31.315 "dma_device_id": "system", 00:10:31.315 "dma_device_type": 1 00:10:31.315 }, 00:10:31.315 { 00:10:31.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.315 "dma_device_type": 2 00:10:31.315 } 00:10:31.315 ], 00:10:31.315 "driver_specific": {} 00:10:31.315 }' 00:10:31.315 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.589 06:05:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.589 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.877 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.877 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.878 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.878 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:31.878 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.878 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.878 "name": "BaseBdev2", 00:10:31.878 "aliases": [ 00:10:31.878 "f95d3c77-55b1-4df1-8428-3d09390490db" 00:10:31.878 ], 00:10:31.878 "product_name": "Malloc disk", 00:10:31.878 "block_size": 512, 00:10:31.878 "num_blocks": 65536, 00:10:31.878 "uuid": "f95d3c77-55b1-4df1-8428-3d09390490db", 00:10:31.878 "assigned_rate_limits": { 00:10:31.878 "rw_ios_per_sec": 0, 00:10:31.878 "rw_mbytes_per_sec": 0, 00:10:31.878 "r_mbytes_per_sec": 0, 00:10:31.878 "w_mbytes_per_sec": 0 00:10:31.878 }, 00:10:31.878 "claimed": true, 00:10:31.878 "claim_type": "exclusive_write", 00:10:31.878 "zoned": false, 00:10:31.878 "supported_io_types": { 00:10:31.878 "read": true, 00:10:31.878 "write": true, 00:10:31.878 "unmap": true, 00:10:31.878 "flush": true, 00:10:31.878 "reset": true, 00:10:31.878 "nvme_admin": false, 00:10:31.878 "nvme_io": false, 00:10:31.878 "nvme_io_md": false, 00:10:31.878 "write_zeroes": true, 00:10:31.878 "zcopy": true, 00:10:31.878 "get_zone_info": false, 00:10:31.878 "zone_management": false, 00:10:31.878 "zone_append": false, 00:10:31.878 "compare": false, 00:10:31.878 "compare_and_write": false, 00:10:31.878 "abort": true, 00:10:31.878 "seek_hole": false, 00:10:31.878 "seek_data": false, 00:10:31.878 "copy": true, 00:10:31.878 "nvme_iov_md": false 00:10:31.878 }, 00:10:31.878 "memory_domains": [ 00:10:31.878 { 00:10:31.878 "dma_device_id": "system", 00:10:31.878 "dma_device_type": 1 00:10:31.878 }, 00:10:31.878 { 00:10:31.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.878 "dma_device_type": 2 00:10:31.878 } 00:10:31.878 ], 00:10:31.878 "driver_specific": {} 00:10:31.878 }' 00:10:31.878 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.138 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.399 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.399 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:32.399 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:32.399 06:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:32.399 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:32.399 "name": "BaseBdev3", 00:10:32.399 "aliases": [ 00:10:32.399 "0c909913-e1e9-48bc-bf3a-1126f2a4503f" 00:10:32.399 ], 00:10:32.399 "product_name": "Malloc disk", 00:10:32.399 "block_size": 512, 00:10:32.399 "num_blocks": 65536, 00:10:32.399 "uuid": "0c909913-e1e9-48bc-bf3a-1126f2a4503f", 00:10:32.399 "assigned_rate_limits": { 00:10:32.399 "rw_ios_per_sec": 0, 00:10:32.399 "rw_mbytes_per_sec": 0, 00:10:32.399 "r_mbytes_per_sec": 0, 00:10:32.399 "w_mbytes_per_sec": 0 00:10:32.399 }, 00:10:32.399 "claimed": true, 00:10:32.399 "claim_type": "exclusive_write", 00:10:32.399 "zoned": false, 00:10:32.399 "supported_io_types": { 00:10:32.399 "read": true, 00:10:32.399 "write": true, 00:10:32.399 "unmap": true, 00:10:32.399 "flush": true, 00:10:32.399 "reset": true, 00:10:32.399 "nvme_admin": false, 00:10:32.399 "nvme_io": false, 00:10:32.399 "nvme_io_md": false, 00:10:32.399 "write_zeroes": true, 00:10:32.399 "zcopy": true, 00:10:32.399 "get_zone_info": false, 00:10:32.399 "zone_management": false, 00:10:32.399 "zone_append": false, 00:10:32.399 "compare": false, 00:10:32.399 "compare_and_write": false, 00:10:32.399 "abort": true, 00:10:32.399 "seek_hole": false, 00:10:32.399 "seek_data": false, 00:10:32.399 "copy": true, 00:10:32.399 "nvme_iov_md": false 00:10:32.399 }, 00:10:32.399 "memory_domains": [ 00:10:32.399 { 00:10:32.399 "dma_device_id": "system", 00:10:32.399 "dma_device_type": 1 00:10:32.399 }, 00:10:32.399 { 00:10:32.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.399 "dma_device_type": 2 00:10:32.399 } 00:10:32.399 ], 00:10:32.399 "driver_specific": {} 00:10:32.399 }' 00:10:32.399 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
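The same four-property comparison repeats for BaseBdev3 below. Condensed, the loop behind these paired jq calls looks roughly like this (the variable names mirror the script's own locals; the rest is a sketch, not the literal helper from bdev_raid.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  raid_bdev_info=$($rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[]')
  base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
      | select(.is_configured == true).name' <<< "$raid_bdev_info")

  for name in $base_bdev_names; do
      base_bdev_info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
      # Malloc disks carry no metadata or DIF, which is why every check
      # after block_size reduces to [[ null == null ]] in the log above.
      for prop in .block_size .md_size .md_interleave .dif_type; do
          [[ "$(jq "$prop" <<< "$raid_bdev_info")" == \
             "$(jq "$prop" <<< "$base_bdev_info")" ]] || exit 1
      done
  done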
00:10:32.658 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:32.916 [2024-08-13 06:05:34.661122] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.916 [2024-08-13 06:05:34.661232] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.916 [2024-08-13 06:05:34.661314] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:32.916 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.917 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.175 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.175 "name": "Existed_Raid", 00:10:33.175 "uuid": "1e821d9d-7421-414f-9130-720d813cb52e", 00:10:33.175 "strip_size_kb": 64, 00:10:33.175 "state": "offline", 00:10:33.175 "raid_level": "concat", 00:10:33.175 "superblock": true, 00:10:33.175 "num_base_bdevs": 3, 00:10:33.175 "num_base_bdevs_discovered": 2, 00:10:33.175 "num_base_bdevs_operational": 2, 00:10:33.175 "base_bdevs_list": [ 00:10:33.175 { 00:10:33.175 "name": null, 00:10:33.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.175 "is_configured": false, 00:10:33.175 "data_offset": 2048, 00:10:33.175 "data_size": 63488 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "name": 
"BaseBdev2", 00:10:33.175 "uuid": "f95d3c77-55b1-4df1-8428-3d09390490db", 00:10:33.175 "is_configured": true, 00:10:33.175 "data_offset": 2048, 00:10:33.175 "data_size": 63488 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "name": "BaseBdev3", 00:10:33.175 "uuid": "0c909913-e1e9-48bc-bf3a-1126f2a4503f", 00:10:33.175 "is_configured": true, 00:10:33.175 "data_offset": 2048, 00:10:33.175 "data_size": 63488 00:10:33.175 } 00:10:33.175 ] 00:10:33.175 }' 00:10:33.175 06:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.175 06:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.742 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:33.742 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:33.742 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.742 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:34.000 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:34.000 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.000 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:34.259 [2024-08-13 06:05:35.834504] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.259 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:34.259 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:34.259 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.259 06:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:34.518 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:34.518 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.518 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:34.518 [2024-08-13 06:05:36.276729] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.518 [2024-08-13 06:05:36.276788] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:34.777 
06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:34.777 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.035 BaseBdev2 00:10:35.035 06:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:35.035 06:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:35.035 06:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:35.035 06:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:35.035 06:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:35.035 06:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:35.035 06:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.294 06:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.553 [ 00:10:35.553 { 00:10:35.553 "name": "BaseBdev2", 00:10:35.553 "aliases": [ 00:10:35.553 "653a9a8e-56e8-458b-84cf-810b43b953dc" 00:10:35.553 ], 00:10:35.553 "product_name": "Malloc disk", 00:10:35.553 "block_size": 512, 00:10:35.553 "num_blocks": 65536, 00:10:35.553 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:35.553 "assigned_rate_limits": { 00:10:35.553 "rw_ios_per_sec": 0, 00:10:35.553 "rw_mbytes_per_sec": 0, 00:10:35.553 "r_mbytes_per_sec": 0, 00:10:35.553 "w_mbytes_per_sec": 0 00:10:35.553 }, 00:10:35.553 "claimed": false, 00:10:35.553 "zoned": false, 00:10:35.553 "supported_io_types": { 00:10:35.553 "read": true, 00:10:35.553 "write": true, 00:10:35.553 "unmap": true, 00:10:35.553 "flush": true, 00:10:35.553 "reset": true, 00:10:35.553 "nvme_admin": false, 00:10:35.553 "nvme_io": false, 00:10:35.553 "nvme_io_md": false, 00:10:35.553 "write_zeroes": true, 00:10:35.553 "zcopy": true, 00:10:35.553 "get_zone_info": false, 00:10:35.553 "zone_management": false, 00:10:35.553 "zone_append": false, 00:10:35.553 "compare": false, 00:10:35.553 "compare_and_write": false, 00:10:35.553 "abort": true, 00:10:35.553 "seek_hole": false, 00:10:35.553 "seek_data": false, 00:10:35.553 "copy": true, 00:10:35.553 "nvme_iov_md": false 00:10:35.553 }, 00:10:35.553 "memory_domains": [ 00:10:35.553 { 00:10:35.553 "dma_device_id": "system", 00:10:35.553 "dma_device_type": 1 00:10:35.553 }, 00:10:35.553 { 00:10:35.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.553 "dma_device_type": 2 00:10:35.553 } 00:10:35.553 ], 00:10:35.553 "driver_specific": {} 00:10:35.553 } 00:10:35.553 ] 00:10:35.553 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:35.553 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:35.553 
06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:35.553 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.553 BaseBdev3 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.812 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.072 [ 00:10:36.072 { 00:10:36.072 "name": "BaseBdev3", 00:10:36.072 "aliases": [ 00:10:36.072 "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7" 00:10:36.072 ], 00:10:36.072 "product_name": "Malloc disk", 00:10:36.072 "block_size": 512, 00:10:36.072 "num_blocks": 65536, 00:10:36.072 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:36.072 "assigned_rate_limits": { 00:10:36.072 "rw_ios_per_sec": 0, 00:10:36.072 "rw_mbytes_per_sec": 0, 00:10:36.072 "r_mbytes_per_sec": 0, 00:10:36.072 "w_mbytes_per_sec": 0 00:10:36.072 }, 00:10:36.072 "claimed": false, 00:10:36.072 "zoned": false, 00:10:36.072 "supported_io_types": { 00:10:36.072 "read": true, 00:10:36.072 "write": true, 00:10:36.072 "unmap": true, 00:10:36.072 "flush": true, 00:10:36.072 "reset": true, 00:10:36.072 "nvme_admin": false, 00:10:36.072 "nvme_io": false, 00:10:36.072 "nvme_io_md": false, 00:10:36.072 "write_zeroes": true, 00:10:36.072 "zcopy": true, 00:10:36.072 "get_zone_info": false, 00:10:36.072 "zone_management": false, 00:10:36.072 "zone_append": false, 00:10:36.072 "compare": false, 00:10:36.072 "compare_and_write": false, 00:10:36.072 "abort": true, 00:10:36.072 "seek_hole": false, 00:10:36.072 "seek_data": false, 00:10:36.072 "copy": true, 00:10:36.072 "nvme_iov_md": false 00:10:36.072 }, 00:10:36.072 "memory_domains": [ 00:10:36.072 { 00:10:36.072 "dma_device_id": "system", 00:10:36.072 "dma_device_type": 1 00:10:36.072 }, 00:10:36.072 { 00:10:36.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.072 "dma_device_type": 2 00:10:36.072 } 00:10:36.072 ], 00:10:36.072 "driver_specific": {} 00:10:36.072 } 00:10:36.072 ] 00:10:36.072 06:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:36.072 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:36.072 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:36.072 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:36.332 [2024-08-13 06:05:37.938681] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.332 [2024-08-13 06:05:37.938748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.332 [2024-08-13 06:05:37.938774] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.332 [2024-08-13 06:05:37.940576] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.332 06:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.591 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:36.591 "name": "Existed_Raid", 00:10:36.591 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:36.591 "strip_size_kb": 64, 00:10:36.591 "state": "configuring", 00:10:36.591 "raid_level": "concat", 00:10:36.591 "superblock": true, 00:10:36.591 "num_base_bdevs": 3, 00:10:36.591 "num_base_bdevs_discovered": 2, 00:10:36.591 "num_base_bdevs_operational": 3, 00:10:36.591 "base_bdevs_list": [ 00:10:36.591 { 00:10:36.591 "name": "BaseBdev1", 00:10:36.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.591 "is_configured": false, 00:10:36.591 "data_offset": 0, 00:10:36.591 "data_size": 0 00:10:36.591 }, 00:10:36.591 { 00:10:36.591 "name": "BaseBdev2", 00:10:36.591 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:36.591 "is_configured": true, 00:10:36.591 "data_offset": 2048, 00:10:36.591 "data_size": 63488 00:10:36.591 }, 00:10:36.591 { 00:10:36.591 "name": "BaseBdev3", 00:10:36.591 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:36.591 "is_configured": true, 00:10:36.591 "data_offset": 2048, 00:10:36.591 "data_size": 63488 00:10:36.591 } 00:10:36.591 ] 00:10:36.591 }' 00:10:36.591 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:36.591 06:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:37.160 [2024-08-13 06:05:38.889026] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.160 06:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.420 06:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.420 "name": "Existed_Raid", 00:10:37.420 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:37.420 "strip_size_kb": 64, 00:10:37.420 "state": "configuring", 00:10:37.420 "raid_level": "concat", 00:10:37.420 "superblock": true, 00:10:37.420 "num_base_bdevs": 3, 00:10:37.420 "num_base_bdevs_discovered": 1, 00:10:37.420 "num_base_bdevs_operational": 3, 00:10:37.420 "base_bdevs_list": [ 00:10:37.420 { 00:10:37.420 "name": "BaseBdev1", 00:10:37.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.420 "is_configured": false, 00:10:37.420 "data_offset": 0, 00:10:37.420 "data_size": 0 00:10:37.420 }, 00:10:37.420 { 00:10:37.420 "name": null, 00:10:37.420 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:37.420 "is_configured": false, 00:10:37.420 "data_offset": 2048, 00:10:37.420 "data_size": 63488 00:10:37.420 }, 00:10:37.420 { 00:10:37.420 "name": "BaseBdev3", 00:10:37.420 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:37.420 "is_configured": true, 00:10:37.420 "data_offset": 2048, 00:10:37.420 "data_size": 63488 00:10:37.420 } 00:10:37.420 ] 00:10:37.420 }' 00:10:37.420 06:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.420 06:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.989 06:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.989 06:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.249 06:05:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:38.249 06:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.509 [2024-08-13 06:05:40.042289] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.509 BaseBdev1 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:38.509 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.769 [ 00:10:38.769 { 00:10:38.769 "name": "BaseBdev1", 00:10:38.769 "aliases": [ 00:10:38.769 "3b9d9874-a40f-4c44-9551-75c167ed0c30" 00:10:38.769 ], 00:10:38.769 "product_name": "Malloc disk", 00:10:38.769 "block_size": 512, 00:10:38.769 "num_blocks": 65536, 00:10:38.769 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:38.769 "assigned_rate_limits": { 00:10:38.769 "rw_ios_per_sec": 0, 00:10:38.769 "rw_mbytes_per_sec": 0, 00:10:38.769 "r_mbytes_per_sec": 0, 00:10:38.769 "w_mbytes_per_sec": 0 00:10:38.769 }, 00:10:38.769 "claimed": true, 00:10:38.769 "claim_type": "exclusive_write", 00:10:38.769 "zoned": false, 00:10:38.769 "supported_io_types": { 00:10:38.769 "read": true, 00:10:38.769 "write": true, 00:10:38.769 "unmap": true, 00:10:38.769 "flush": true, 00:10:38.769 "reset": true, 00:10:38.769 "nvme_admin": false, 00:10:38.769 "nvme_io": false, 00:10:38.769 "nvme_io_md": false, 00:10:38.769 "write_zeroes": true, 00:10:38.769 "zcopy": true, 00:10:38.769 "get_zone_info": false, 00:10:38.769 "zone_management": false, 00:10:38.769 "zone_append": false, 00:10:38.769 "compare": false, 00:10:38.769 "compare_and_write": false, 00:10:38.769 "abort": true, 00:10:38.769 "seek_hole": false, 00:10:38.769 "seek_data": false, 00:10:38.769 "copy": true, 00:10:38.769 "nvme_iov_md": false 00:10:38.769 }, 00:10:38.769 "memory_domains": [ 00:10:38.769 { 00:10:38.769 "dma_device_id": "system", 00:10:38.769 "dma_device_type": 1 00:10:38.769 }, 00:10:38.769 { 00:10:38.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.769 "dma_device_type": 2 00:10:38.769 } 00:10:38.769 ], 00:10:38.769 "driver_specific": {} 00:10:38.769 } 00:10:38.769 ] 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:38.769 06:05:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.769 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.028 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.029 "name": "Existed_Raid", 00:10:39.029 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:39.029 "strip_size_kb": 64, 00:10:39.029 "state": "configuring", 00:10:39.029 "raid_level": "concat", 00:10:39.029 "superblock": true, 00:10:39.029 "num_base_bdevs": 3, 00:10:39.029 "num_base_bdevs_discovered": 2, 00:10:39.029 "num_base_bdevs_operational": 3, 00:10:39.029 "base_bdevs_list": [ 00:10:39.029 { 00:10:39.029 "name": "BaseBdev1", 00:10:39.029 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:39.029 "is_configured": true, 00:10:39.029 "data_offset": 2048, 00:10:39.029 "data_size": 63488 00:10:39.029 }, 00:10:39.029 { 00:10:39.029 "name": null, 00:10:39.029 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:39.029 "is_configured": false, 00:10:39.029 "data_offset": 2048, 00:10:39.029 "data_size": 63488 00:10:39.029 }, 00:10:39.029 { 00:10:39.029 "name": "BaseBdev3", 00:10:39.029 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:39.029 "is_configured": true, 00:10:39.029 "data_offset": 2048, 00:10:39.029 "data_size": 63488 00:10:39.029 } 00:10:39.029 ] 00:10:39.029 }' 00:10:39.029 06:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.029 06:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.600 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.600 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.600 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:39.600 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:39.860 [2024-08-13 06:05:41.555769] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.860 06:05:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.860 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.120 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:40.120 "name": "Existed_Raid", 00:10:40.120 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:40.120 "strip_size_kb": 64, 00:10:40.120 "state": "configuring", 00:10:40.120 "raid_level": "concat", 00:10:40.120 "superblock": true, 00:10:40.121 "num_base_bdevs": 3, 00:10:40.121 "num_base_bdevs_discovered": 1, 00:10:40.121 "num_base_bdevs_operational": 3, 00:10:40.121 "base_bdevs_list": [ 00:10:40.121 { 00:10:40.121 "name": "BaseBdev1", 00:10:40.121 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:40.121 "is_configured": true, 00:10:40.121 "data_offset": 2048, 00:10:40.121 "data_size": 63488 00:10:40.121 }, 00:10:40.121 { 00:10:40.121 "name": null, 00:10:40.121 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:40.121 "is_configured": false, 00:10:40.121 "data_offset": 2048, 00:10:40.121 "data_size": 63488 00:10:40.121 }, 00:10:40.121 { 00:10:40.121 "name": null, 00:10:40.121 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:40.121 "is_configured": false, 00:10:40.121 "data_offset": 2048, 00:10:40.121 "data_size": 63488 00:10:40.121 } 00:10:40.121 ] 00:10:40.121 }' 00:10:40.121 06:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:40.121 06:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.691 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.691 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.950 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:40.950 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.950 [2024-08-13 06:05:42.721790] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.210 06:05:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.210 "name": "Existed_Raid", 00:10:41.210 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:41.210 "strip_size_kb": 64, 00:10:41.210 "state": "configuring", 00:10:41.210 "raid_level": "concat", 00:10:41.210 "superblock": true, 00:10:41.210 "num_base_bdevs": 3, 00:10:41.210 "num_base_bdevs_discovered": 2, 00:10:41.210 "num_base_bdevs_operational": 3, 00:10:41.210 "base_bdevs_list": [ 00:10:41.210 { 00:10:41.210 "name": "BaseBdev1", 00:10:41.210 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:41.210 "is_configured": true, 00:10:41.210 "data_offset": 2048, 00:10:41.210 "data_size": 63488 00:10:41.210 }, 00:10:41.210 { 00:10:41.210 "name": null, 00:10:41.210 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:41.210 "is_configured": false, 00:10:41.210 "data_offset": 2048, 00:10:41.210 "data_size": 63488 00:10:41.210 }, 00:10:41.210 { 00:10:41.210 "name": "BaseBdev3", 00:10:41.210 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:41.210 "is_configured": true, 00:10:41.210 "data_offset": 2048, 00:10:41.210 "data_size": 63488 00:10:41.210 } 00:10:41.210 ] 00:10:41.210 }' 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.210 06:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.779 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.779 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.037 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:42.038 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:42.297 
[2024-08-13 06:05:43.844025] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.297 06:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.297 06:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.297 "name": "Existed_Raid", 00:10:42.297 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:42.297 "strip_size_kb": 64, 00:10:42.297 "state": "configuring", 00:10:42.297 "raid_level": "concat", 00:10:42.297 "superblock": true, 00:10:42.297 "num_base_bdevs": 3, 00:10:42.297 "num_base_bdevs_discovered": 1, 00:10:42.297 "num_base_bdevs_operational": 3, 00:10:42.297 "base_bdevs_list": [ 00:10:42.297 { 00:10:42.297 "name": null, 00:10:42.297 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:42.297 "is_configured": false, 00:10:42.297 "data_offset": 2048, 00:10:42.297 "data_size": 63488 00:10:42.297 }, 00:10:42.297 { 00:10:42.297 "name": null, 00:10:42.297 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:42.297 "is_configured": false, 00:10:42.297 "data_offset": 2048, 00:10:42.297 "data_size": 63488 00:10:42.297 }, 00:10:42.297 { 00:10:42.297 "name": "BaseBdev3", 00:10:42.297 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:42.297 "is_configured": true, 00:10:42.297 "data_offset": 2048, 00:10:42.297 "data_size": 63488 00:10:42.297 } 00:10:42.298 ] 00:10:42.298 }' 00:10:42.298 06:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.298 06:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.233 06:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.233 06:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.233 06:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:43.234 06:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:43.493 [2024-08-13 06:05:45.036636] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.493 "name": "Existed_Raid", 00:10:43.493 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:43.493 "strip_size_kb": 64, 00:10:43.493 "state": "configuring", 00:10:43.493 "raid_level": "concat", 00:10:43.493 "superblock": true, 00:10:43.493 "num_base_bdevs": 3, 00:10:43.493 "num_base_bdevs_discovered": 2, 00:10:43.493 "num_base_bdevs_operational": 3, 00:10:43.493 "base_bdevs_list": [ 00:10:43.493 { 00:10:43.493 "name": null, 00:10:43.493 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:43.493 "is_configured": false, 00:10:43.493 "data_offset": 2048, 00:10:43.493 "data_size": 63488 00:10:43.493 }, 00:10:43.493 { 00:10:43.493 "name": "BaseBdev2", 00:10:43.493 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:43.493 "is_configured": true, 00:10:43.493 "data_offset": 2048, 00:10:43.493 "data_size": 63488 00:10:43.493 }, 00:10:43.493 { 00:10:43.493 "name": "BaseBdev3", 00:10:43.493 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:43.493 "is_configured": true, 00:10:43.493 "data_offset": 2048, 00:10:43.493 "data_size": 63488 00:10:43.493 } 00:10:43.493 ] 00:10:43.493 }' 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.493 06:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.061 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:44.061 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.321 06:05:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:44.321 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.321 06:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:44.581 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3b9d9874-a40f-4c44-9551-75c167ed0c30 00:10:44.581 [2024-08-13 06:05:46.357284] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:44.581 [2024-08-13 06:05:46.357454] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:44.581 [2024-08-13 06:05:46.357475] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:44.581 [2024-08-13 06:05:46.357692] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:10:44.581 [2024-08-13 06:05:46.357797] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:44.581 [2024-08-13 06:05:46.357807] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:44.581 [2024-08-13 06:05:46.357901] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.581 NewBaseBdev 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:44.840 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.100 [ 00:10:45.100 { 00:10:45.100 "name": "NewBaseBdev", 00:10:45.100 "aliases": [ 00:10:45.100 "3b9d9874-a40f-4c44-9551-75c167ed0c30" 00:10:45.100 ], 00:10:45.100 "product_name": "Malloc disk", 00:10:45.100 "block_size": 512, 00:10:45.100 "num_blocks": 65536, 00:10:45.100 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:45.100 "assigned_rate_limits": { 00:10:45.100 "rw_ios_per_sec": 0, 00:10:45.100 "rw_mbytes_per_sec": 0, 00:10:45.100 "r_mbytes_per_sec": 0, 00:10:45.100 "w_mbytes_per_sec": 0 00:10:45.100 }, 00:10:45.100 "claimed": true, 00:10:45.100 "claim_type": "exclusive_write", 00:10:45.100 "zoned": false, 00:10:45.100 "supported_io_types": { 00:10:45.100 "read": true, 00:10:45.100 "write": true, 00:10:45.100 "unmap": true, 00:10:45.100 "flush": true, 00:10:45.100 "reset": true, 00:10:45.100 "nvme_admin": false, 00:10:45.100 "nvme_io": false, 00:10:45.100 "nvme_io_md": false, 00:10:45.100 
"write_zeroes": true, 00:10:45.100 "zcopy": true, 00:10:45.100 "get_zone_info": false, 00:10:45.100 "zone_management": false, 00:10:45.100 "zone_append": false, 00:10:45.100 "compare": false, 00:10:45.100 "compare_and_write": false, 00:10:45.100 "abort": true, 00:10:45.100 "seek_hole": false, 00:10:45.100 "seek_data": false, 00:10:45.100 "copy": true, 00:10:45.100 "nvme_iov_md": false 00:10:45.100 }, 00:10:45.100 "memory_domains": [ 00:10:45.100 { 00:10:45.100 "dma_device_id": "system", 00:10:45.100 "dma_device_type": 1 00:10:45.100 }, 00:10:45.100 { 00:10:45.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.100 "dma_device_type": 2 00:10:45.100 } 00:10:45.100 ], 00:10:45.100 "driver_specific": {} 00:10:45.100 } 00:10:45.100 ] 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.100 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.360 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.360 "name": "Existed_Raid", 00:10:45.360 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:45.360 "strip_size_kb": 64, 00:10:45.360 "state": "online", 00:10:45.360 "raid_level": "concat", 00:10:45.360 "superblock": true, 00:10:45.360 "num_base_bdevs": 3, 00:10:45.360 "num_base_bdevs_discovered": 3, 00:10:45.360 "num_base_bdevs_operational": 3, 00:10:45.360 "base_bdevs_list": [ 00:10:45.360 { 00:10:45.360 "name": "NewBaseBdev", 00:10:45.360 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:45.360 "is_configured": true, 00:10:45.360 "data_offset": 2048, 00:10:45.360 "data_size": 63488 00:10:45.360 }, 00:10:45.360 { 00:10:45.360 "name": "BaseBdev2", 00:10:45.360 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:45.360 "is_configured": true, 00:10:45.360 "data_offset": 2048, 00:10:45.360 "data_size": 63488 00:10:45.360 }, 00:10:45.360 { 00:10:45.360 "name": "BaseBdev3", 00:10:45.360 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:45.360 "is_configured": true, 00:10:45.360 "data_offset": 2048, 00:10:45.360 "data_size": 63488 00:10:45.360 } 00:10:45.360 ] 
00:10:45.360 }' 00:10:45.360 06:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.360 06:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:45.929 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:45.929 [2024-08-13 06:05:47.703582] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.189 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:46.189 "name": "Existed_Raid", 00:10:46.189 "aliases": [ 00:10:46.189 "6e4c4460-fd63-4aca-bf59-09fa051785f2" 00:10:46.189 ], 00:10:46.189 "product_name": "Raid Volume", 00:10:46.189 "block_size": 512, 00:10:46.189 "num_blocks": 190464, 00:10:46.189 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:46.189 "assigned_rate_limits": { 00:10:46.189 "rw_ios_per_sec": 0, 00:10:46.189 "rw_mbytes_per_sec": 0, 00:10:46.189 "r_mbytes_per_sec": 0, 00:10:46.189 "w_mbytes_per_sec": 0 00:10:46.189 }, 00:10:46.189 "claimed": false, 00:10:46.189 "zoned": false, 00:10:46.189 "supported_io_types": { 00:10:46.189 "read": true, 00:10:46.189 "write": true, 00:10:46.189 "unmap": true, 00:10:46.189 "flush": true, 00:10:46.189 "reset": true, 00:10:46.189 "nvme_admin": false, 00:10:46.189 "nvme_io": false, 00:10:46.189 "nvme_io_md": false, 00:10:46.189 "write_zeroes": true, 00:10:46.189 "zcopy": false, 00:10:46.189 "get_zone_info": false, 00:10:46.189 "zone_management": false, 00:10:46.189 "zone_append": false, 00:10:46.189 "compare": false, 00:10:46.189 "compare_and_write": false, 00:10:46.189 "abort": false, 00:10:46.189 "seek_hole": false, 00:10:46.189 "seek_data": false, 00:10:46.189 "copy": false, 00:10:46.189 "nvme_iov_md": false 00:10:46.189 }, 00:10:46.189 "memory_domains": [ 00:10:46.189 { 00:10:46.189 "dma_device_id": "system", 00:10:46.189 "dma_device_type": 1 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.189 "dma_device_type": 2 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "dma_device_id": "system", 00:10:46.189 "dma_device_type": 1 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.189 "dma_device_type": 2 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "dma_device_id": "system", 00:10:46.189 "dma_device_type": 1 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.189 "dma_device_type": 2 00:10:46.189 } 00:10:46.189 ], 00:10:46.189 "driver_specific": { 00:10:46.189 "raid": { 00:10:46.189 "uuid": "6e4c4460-fd63-4aca-bf59-09fa051785f2", 00:10:46.189 
"strip_size_kb": 64, 00:10:46.189 "state": "online", 00:10:46.189 "raid_level": "concat", 00:10:46.189 "superblock": true, 00:10:46.189 "num_base_bdevs": 3, 00:10:46.189 "num_base_bdevs_discovered": 3, 00:10:46.189 "num_base_bdevs_operational": 3, 00:10:46.189 "base_bdevs_list": [ 00:10:46.189 { 00:10:46.189 "name": "NewBaseBdev", 00:10:46.189 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:46.189 "is_configured": true, 00:10:46.189 "data_offset": 2048, 00:10:46.189 "data_size": 63488 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "name": "BaseBdev2", 00:10:46.189 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:46.189 "is_configured": true, 00:10:46.189 "data_offset": 2048, 00:10:46.189 "data_size": 63488 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "name": "BaseBdev3", 00:10:46.189 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:46.189 "is_configured": true, 00:10:46.189 "data_offset": 2048, 00:10:46.189 "data_size": 63488 00:10:46.189 } 00:10:46.189 ] 00:10:46.189 } 00:10:46.189 } 00:10:46.189 }' 00:10:46.189 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.189 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:46.189 BaseBdev2 00:10:46.189 BaseBdev3' 00:10:46.189 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:46.189 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:46.189 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:46.189 06:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:46.189 "name": "NewBaseBdev", 00:10:46.189 "aliases": [ 00:10:46.189 "3b9d9874-a40f-4c44-9551-75c167ed0c30" 00:10:46.189 ], 00:10:46.189 "product_name": "Malloc disk", 00:10:46.189 "block_size": 512, 00:10:46.189 "num_blocks": 65536, 00:10:46.189 "uuid": "3b9d9874-a40f-4c44-9551-75c167ed0c30", 00:10:46.189 "assigned_rate_limits": { 00:10:46.189 "rw_ios_per_sec": 0, 00:10:46.189 "rw_mbytes_per_sec": 0, 00:10:46.189 "r_mbytes_per_sec": 0, 00:10:46.189 "w_mbytes_per_sec": 0 00:10:46.189 }, 00:10:46.189 "claimed": true, 00:10:46.189 "claim_type": "exclusive_write", 00:10:46.189 "zoned": false, 00:10:46.189 "supported_io_types": { 00:10:46.189 "read": true, 00:10:46.189 "write": true, 00:10:46.189 "unmap": true, 00:10:46.189 "flush": true, 00:10:46.189 "reset": true, 00:10:46.189 "nvme_admin": false, 00:10:46.189 "nvme_io": false, 00:10:46.189 "nvme_io_md": false, 00:10:46.189 "write_zeroes": true, 00:10:46.189 "zcopy": true, 00:10:46.189 "get_zone_info": false, 00:10:46.189 "zone_management": false, 00:10:46.189 "zone_append": false, 00:10:46.189 "compare": false, 00:10:46.189 "compare_and_write": false, 00:10:46.189 "abort": true, 00:10:46.189 "seek_hole": false, 00:10:46.189 "seek_data": false, 00:10:46.189 "copy": true, 00:10:46.189 "nvme_iov_md": false 00:10:46.189 }, 00:10:46.189 "memory_domains": [ 00:10:46.189 { 00:10:46.189 "dma_device_id": "system", 00:10:46.189 "dma_device_type": 1 00:10:46.189 }, 00:10:46.189 { 00:10:46.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.189 "dma_device_type": 2 00:10:46.189 } 00:10:46.189 ], 00:10:46.189 "driver_specific": {} 00:10:46.189 }' 00:10:46.189 06:05:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:46.449 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.711 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.711 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:46.711 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:46.711 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:46.711 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:46.711 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:46.711 "name": "BaseBdev2", 00:10:46.711 "aliases": [ 00:10:46.711 "653a9a8e-56e8-458b-84cf-810b43b953dc" 00:10:46.711 ], 00:10:46.711 "product_name": "Malloc disk", 00:10:46.711 "block_size": 512, 00:10:46.711 "num_blocks": 65536, 00:10:46.711 "uuid": "653a9a8e-56e8-458b-84cf-810b43b953dc", 00:10:46.711 "assigned_rate_limits": { 00:10:46.711 "rw_ios_per_sec": 0, 00:10:46.711 "rw_mbytes_per_sec": 0, 00:10:46.711 "r_mbytes_per_sec": 0, 00:10:46.711 "w_mbytes_per_sec": 0 00:10:46.711 }, 00:10:46.711 "claimed": true, 00:10:46.711 "claim_type": "exclusive_write", 00:10:46.711 "zoned": false, 00:10:46.711 "supported_io_types": { 00:10:46.711 "read": true, 00:10:46.711 "write": true, 00:10:46.711 "unmap": true, 00:10:46.711 "flush": true, 00:10:46.711 "reset": true, 00:10:46.711 "nvme_admin": false, 00:10:46.711 "nvme_io": false, 00:10:46.711 "nvme_io_md": false, 00:10:46.711 "write_zeroes": true, 00:10:46.711 "zcopy": true, 00:10:46.711 "get_zone_info": false, 00:10:46.711 "zone_management": false, 00:10:46.711 "zone_append": false, 00:10:46.711 "compare": false, 00:10:46.711 "compare_and_write": false, 00:10:46.711 "abort": true, 00:10:46.711 "seek_hole": false, 00:10:46.711 "seek_data": false, 00:10:46.711 "copy": true, 00:10:46.711 "nvme_iov_md": false 00:10:46.711 }, 00:10:46.711 "memory_domains": [ 00:10:46.711 { 00:10:46.711 "dma_device_id": "system", 00:10:46.711 "dma_device_type": 1 00:10:46.711 }, 00:10:46.711 { 00:10:46.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.711 "dma_device_type": 2 00:10:46.711 } 00:10:46.711 ], 00:10:46.711 "driver_specific": {} 00:10:46.711 }' 00:10:46.711 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:46.978 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:47.235 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:47.236 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:47.236 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:47.236 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:47.236 06:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:47.494 "name": "BaseBdev3", 00:10:47.494 "aliases": [ 00:10:47.494 "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7" 00:10:47.494 ], 00:10:47.494 "product_name": "Malloc disk", 00:10:47.494 "block_size": 512, 00:10:47.494 "num_blocks": 65536, 00:10:47.494 "uuid": "1fb6e5ad-9c3a-4af7-8a4a-1cb4230726f7", 00:10:47.494 "assigned_rate_limits": { 00:10:47.494 "rw_ios_per_sec": 0, 00:10:47.494 "rw_mbytes_per_sec": 0, 00:10:47.494 "r_mbytes_per_sec": 0, 00:10:47.494 "w_mbytes_per_sec": 0 00:10:47.494 }, 00:10:47.494 "claimed": true, 00:10:47.494 "claim_type": "exclusive_write", 00:10:47.494 "zoned": false, 00:10:47.494 "supported_io_types": { 00:10:47.494 "read": true, 00:10:47.494 "write": true, 00:10:47.494 "unmap": true, 00:10:47.494 "flush": true, 00:10:47.494 "reset": true, 00:10:47.494 "nvme_admin": false, 00:10:47.494 "nvme_io": false, 00:10:47.494 "nvme_io_md": false, 00:10:47.494 "write_zeroes": true, 00:10:47.494 "zcopy": true, 00:10:47.494 "get_zone_info": false, 00:10:47.494 "zone_management": false, 00:10:47.494 "zone_append": false, 00:10:47.494 "compare": false, 00:10:47.494 "compare_and_write": false, 00:10:47.494 "abort": true, 00:10:47.494 "seek_hole": false, 00:10:47.494 "seek_data": false, 00:10:47.494 "copy": true, 00:10:47.494 "nvme_iov_md": false 00:10:47.494 }, 00:10:47.494 "memory_domains": [ 00:10:47.494 { 00:10:47.494 "dma_device_id": "system", 00:10:47.494 "dma_device_type": 1 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.494 "dma_device_type": 2 00:10:47.494 } 00:10:47.494 ], 00:10:47.494 "driver_specific": {} 00:10:47.494 }' 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:47.494 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:47.753 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:47.753 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:47.753 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:48.012 [2024-08-13 06:05:49.548505] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.012 [2024-08-13 06:05:49.548553] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.012 [2024-08-13 06:05:49.548659] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.012 [2024-08-13 06:05:49.548718] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.012 [2024-08-13 06:05:49.548728] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 78653 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 78653 ']' 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 78653 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78653 00:10:48.012 killing process with pid 78653 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78653' 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 78653 00:10:48.012 [2024-08-13 06:05:49.605050] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.012 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 78653 00:10:48.012 [2024-08-13 06:05:49.635443] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.271 06:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:48.271 00:10:48.271 real 0m24.600s 00:10:48.271 user 0m45.756s 00:10:48.271 sys 
0m3.733s 00:10:48.271 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:48.271 ************************************ 00:10:48.271 END TEST raid_state_function_test_sb 00:10:48.271 ************************************ 00:10:48.271 06:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.271 06:05:49 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:48.271 06:05:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:48.271 06:05:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.271 06:05:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.271 ************************************ 00:10:48.271 START TEST raid_superblock_test 00:10:48.271 ************************************ 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=79548 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 79548 /var/tmp/spdk-raid.sock 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 79548 ']' 00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:48.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:10:48.271 06:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:48.272 06:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:48.272 06:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:48.272 06:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.272 [2024-08-13 06:05:50.013310] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:10:48.272 [2024-08-13 06:05:50.013521] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79548 ] 00:10:48.532 [2024-08-13 06:05:50.159233] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.532 [2024-08-13 06:05:50.205146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.532 [2024-08-13 06:05:50.247450] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.532 [2024-08-13 06:05:50.247486] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.100 06:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:49.358 malloc1 00:10:49.358 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:49.702 [2024-08-13 06:05:51.220011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.702 [2024-08-13 06:05:51.220198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.702 [2024-08-13 06:05:51.220251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:49.702 [2024-08-13 06:05:51.220287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.702 [2024-08-13 06:05:51.222465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.702 [2024-08-13 
06:05:51.222540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.702 pt1 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:49.702 malloc2 00:10:49.702 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.961 [2024-08-13 06:05:51.624124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.961 [2024-08-13 06:05:51.624281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.961 [2024-08-13 06:05:51.624320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:49.961 [2024-08-13 06:05:51.624347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.961 [2024-08-13 06:05:51.626531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.961 [2024-08-13 06:05:51.626602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.961 pt2 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.961 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:50.220 malloc3 00:10:50.220 06:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.480 
[2024-08-13 06:05:52.037349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.480 [2024-08-13 06:05:52.037497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.480 [2024-08-13 06:05:52.037539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:50.480 [2024-08-13 06:05:52.037566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.480 [2024-08-13 06:05:52.039723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.480 [2024-08-13 06:05:52.039797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.480 pt3 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:50.480 [2024-08-13 06:05:52.221198] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.480 [2024-08-13 06:05:52.223169] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.480 [2024-08-13 06:05:52.223230] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.480 [2024-08-13 06:05:52.223397] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:50.480 [2024-08-13 06:05:52.223412] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:50.480 [2024-08-13 06:05:52.223742] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:50.480 [2024-08-13 06:05:52.223872] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:50.480 [2024-08-13 06:05:52.223880] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:50.480 [2024-08-13 06:05:52.224059] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:10:50.480 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.739 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:50.739 "name": "raid_bdev1", 00:10:50.739 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:50.739 "strip_size_kb": 64, 00:10:50.739 "state": "online", 00:10:50.739 "raid_level": "concat", 00:10:50.739 "superblock": true, 00:10:50.739 "num_base_bdevs": 3, 00:10:50.739 "num_base_bdevs_discovered": 3, 00:10:50.739 "num_base_bdevs_operational": 3, 00:10:50.739 "base_bdevs_list": [ 00:10:50.739 { 00:10:50.739 "name": "pt1", 00:10:50.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.740 "is_configured": true, 00:10:50.740 "data_offset": 2048, 00:10:50.740 "data_size": 63488 00:10:50.740 }, 00:10:50.740 { 00:10:50.740 "name": "pt2", 00:10:50.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.740 "is_configured": true, 00:10:50.740 "data_offset": 2048, 00:10:50.740 "data_size": 63488 00:10:50.740 }, 00:10:50.740 { 00:10:50.740 "name": "pt3", 00:10:50.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.740 "is_configured": true, 00:10:50.740 "data_offset": 2048, 00:10:50.740 "data_size": 63488 00:10:50.740 } 00:10:50.740 ] 00:10:50.740 }' 00:10:50.740 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:50.740 06:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:51.307 06:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:51.567 [2024-08-13 06:05:53.151800] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.567 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:51.567 "name": "raid_bdev1", 00:10:51.567 "aliases": [ 00:10:51.567 "1634d543-653a-401b-90d0-fa1892a1158f" 00:10:51.567 ], 00:10:51.567 "product_name": "Raid Volume", 00:10:51.567 "block_size": 512, 00:10:51.567 "num_blocks": 190464, 00:10:51.567 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:51.567 "assigned_rate_limits": { 00:10:51.567 "rw_ios_per_sec": 0, 00:10:51.567 "rw_mbytes_per_sec": 0, 00:10:51.567 "r_mbytes_per_sec": 0, 00:10:51.567 "w_mbytes_per_sec": 0 00:10:51.567 }, 00:10:51.567 "claimed": false, 00:10:51.567 "zoned": false, 00:10:51.567 "supported_io_types": { 00:10:51.567 "read": true, 00:10:51.567 "write": true, 00:10:51.567 "unmap": true, 00:10:51.567 "flush": true, 00:10:51.567 "reset": true, 00:10:51.567 "nvme_admin": false, 00:10:51.567 "nvme_io": false, 00:10:51.567 "nvme_io_md": false, 00:10:51.567 "write_zeroes": true, 
00:10:51.567 "zcopy": false, 00:10:51.567 "get_zone_info": false, 00:10:51.567 "zone_management": false, 00:10:51.567 "zone_append": false, 00:10:51.567 "compare": false, 00:10:51.567 "compare_and_write": false, 00:10:51.567 "abort": false, 00:10:51.567 "seek_hole": false, 00:10:51.567 "seek_data": false, 00:10:51.567 "copy": false, 00:10:51.567 "nvme_iov_md": false 00:10:51.567 }, 00:10:51.567 "memory_domains": [ 00:10:51.567 { 00:10:51.567 "dma_device_id": "system", 00:10:51.567 "dma_device_type": 1 00:10:51.567 }, 00:10:51.567 { 00:10:51.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.567 "dma_device_type": 2 00:10:51.567 }, 00:10:51.567 { 00:10:51.567 "dma_device_id": "system", 00:10:51.567 "dma_device_type": 1 00:10:51.567 }, 00:10:51.567 { 00:10:51.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.567 "dma_device_type": 2 00:10:51.567 }, 00:10:51.567 { 00:10:51.567 "dma_device_id": "system", 00:10:51.567 "dma_device_type": 1 00:10:51.567 }, 00:10:51.567 { 00:10:51.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.567 "dma_device_type": 2 00:10:51.567 } 00:10:51.567 ], 00:10:51.567 "driver_specific": { 00:10:51.567 "raid": { 00:10:51.567 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:51.567 "strip_size_kb": 64, 00:10:51.567 "state": "online", 00:10:51.567 "raid_level": "concat", 00:10:51.567 "superblock": true, 00:10:51.567 "num_base_bdevs": 3, 00:10:51.567 "num_base_bdevs_discovered": 3, 00:10:51.567 "num_base_bdevs_operational": 3, 00:10:51.567 "base_bdevs_list": [ 00:10:51.567 { 00:10:51.567 "name": "pt1", 00:10:51.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.567 "is_configured": true, 00:10:51.567 "data_offset": 2048, 00:10:51.567 "data_size": 63488 00:10:51.567 }, 00:10:51.567 { 00:10:51.567 "name": "pt2", 00:10:51.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.567 "is_configured": true, 00:10:51.567 "data_offset": 2048, 00:10:51.567 "data_size": 63488 00:10:51.567 }, 00:10:51.567 { 00:10:51.567 "name": "pt3", 00:10:51.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.567 "is_configured": true, 00:10:51.567 "data_offset": 2048, 00:10:51.567 "data_size": 63488 00:10:51.567 } 00:10:51.567 ] 00:10:51.567 } 00:10:51.567 } 00:10:51.567 }' 00:10:51.567 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.567 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:51.567 pt2 00:10:51.567 pt3' 00:10:51.567 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:51.567 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:51.567 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:51.826 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:51.826 "name": "pt1", 00:10:51.826 "aliases": [ 00:10:51.826 "00000000-0000-0000-0000-000000000001" 00:10:51.826 ], 00:10:51.826 "product_name": "passthru", 00:10:51.826 "block_size": 512, 00:10:51.826 "num_blocks": 65536, 00:10:51.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.826 "assigned_rate_limits": { 00:10:51.827 "rw_ios_per_sec": 0, 00:10:51.827 "rw_mbytes_per_sec": 0, 00:10:51.827 "r_mbytes_per_sec": 0, 00:10:51.827 "w_mbytes_per_sec": 0 00:10:51.827 }, 00:10:51.827 "claimed": true, 
00:10:51.827 "claim_type": "exclusive_write", 00:10:51.827 "zoned": false, 00:10:51.827 "supported_io_types": { 00:10:51.827 "read": true, 00:10:51.827 "write": true, 00:10:51.827 "unmap": true, 00:10:51.827 "flush": true, 00:10:51.827 "reset": true, 00:10:51.827 "nvme_admin": false, 00:10:51.827 "nvme_io": false, 00:10:51.827 "nvme_io_md": false, 00:10:51.827 "write_zeroes": true, 00:10:51.827 "zcopy": true, 00:10:51.827 "get_zone_info": false, 00:10:51.827 "zone_management": false, 00:10:51.827 "zone_append": false, 00:10:51.827 "compare": false, 00:10:51.827 "compare_and_write": false, 00:10:51.827 "abort": true, 00:10:51.827 "seek_hole": false, 00:10:51.827 "seek_data": false, 00:10:51.827 "copy": true, 00:10:51.827 "nvme_iov_md": false 00:10:51.827 }, 00:10:51.827 "memory_domains": [ 00:10:51.827 { 00:10:51.827 "dma_device_id": "system", 00:10:51.827 "dma_device_type": 1 00:10:51.827 }, 00:10:51.827 { 00:10:51.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.827 "dma_device_type": 2 00:10:51.827 } 00:10:51.827 ], 00:10:51.827 "driver_specific": { 00:10:51.827 "passthru": { 00:10:51.827 "name": "pt1", 00:10:51.827 "base_bdev_name": "malloc1" 00:10:51.827 } 00:10:51.827 } 00:10:51.827 }' 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.827 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:52.085 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:52.085 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.085 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.085 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:52.085 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:52.085 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:52.085 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:52.344 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:52.344 "name": "pt2", 00:10:52.344 "aliases": [ 00:10:52.344 "00000000-0000-0000-0000-000000000002" 00:10:52.344 ], 00:10:52.344 "product_name": "passthru", 00:10:52.344 "block_size": 512, 00:10:52.344 "num_blocks": 65536, 00:10:52.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.344 "assigned_rate_limits": { 00:10:52.344 "rw_ios_per_sec": 0, 00:10:52.344 "rw_mbytes_per_sec": 0, 00:10:52.344 "r_mbytes_per_sec": 0, 00:10:52.344 "w_mbytes_per_sec": 0 00:10:52.344 }, 00:10:52.344 "claimed": true, 00:10:52.344 "claim_type": "exclusive_write", 00:10:52.344 "zoned": false, 00:10:52.344 "supported_io_types": { 00:10:52.344 "read": true, 00:10:52.344 
"write": true, 00:10:52.344 "unmap": true, 00:10:52.344 "flush": true, 00:10:52.344 "reset": true, 00:10:52.344 "nvme_admin": false, 00:10:52.344 "nvme_io": false, 00:10:52.344 "nvme_io_md": false, 00:10:52.344 "write_zeroes": true, 00:10:52.344 "zcopy": true, 00:10:52.344 "get_zone_info": false, 00:10:52.344 "zone_management": false, 00:10:52.344 "zone_append": false, 00:10:52.344 "compare": false, 00:10:52.344 "compare_and_write": false, 00:10:52.344 "abort": true, 00:10:52.344 "seek_hole": false, 00:10:52.344 "seek_data": false, 00:10:52.344 "copy": true, 00:10:52.344 "nvme_iov_md": false 00:10:52.344 }, 00:10:52.344 "memory_domains": [ 00:10:52.344 { 00:10:52.344 "dma_device_id": "system", 00:10:52.344 "dma_device_type": 1 00:10:52.344 }, 00:10:52.344 { 00:10:52.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.344 "dma_device_type": 2 00:10:52.344 } 00:10:52.344 ], 00:10:52.344 "driver_specific": { 00:10:52.344 "passthru": { 00:10:52.344 "name": "pt2", 00:10:52.344 "base_bdev_name": "malloc2" 00:10:52.344 } 00:10:52.344 } 00:10:52.344 }' 00:10:52.344 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.344 06:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.344 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:52.345 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.345 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.345 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:52.345 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:52.604 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:52.863 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:52.863 "name": "pt3", 00:10:52.863 "aliases": [ 00:10:52.863 "00000000-0000-0000-0000-000000000003" 00:10:52.863 ], 00:10:52.863 "product_name": "passthru", 00:10:52.863 "block_size": 512, 00:10:52.863 "num_blocks": 65536, 00:10:52.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.863 "assigned_rate_limits": { 00:10:52.863 "rw_ios_per_sec": 0, 00:10:52.863 "rw_mbytes_per_sec": 0, 00:10:52.863 "r_mbytes_per_sec": 0, 00:10:52.863 "w_mbytes_per_sec": 0 00:10:52.863 }, 00:10:52.863 "claimed": true, 00:10:52.863 "claim_type": "exclusive_write", 00:10:52.863 "zoned": false, 00:10:52.863 "supported_io_types": { 00:10:52.863 "read": true, 00:10:52.863 "write": true, 00:10:52.863 "unmap": true, 00:10:52.863 "flush": true, 00:10:52.863 "reset": true, 00:10:52.863 "nvme_admin": false, 00:10:52.863 
"nvme_io": false, 00:10:52.863 "nvme_io_md": false, 00:10:52.863 "write_zeroes": true, 00:10:52.863 "zcopy": true, 00:10:52.863 "get_zone_info": false, 00:10:52.863 "zone_management": false, 00:10:52.863 "zone_append": false, 00:10:52.863 "compare": false, 00:10:52.863 "compare_and_write": false, 00:10:52.863 "abort": true, 00:10:52.863 "seek_hole": false, 00:10:52.863 "seek_data": false, 00:10:52.863 "copy": true, 00:10:52.863 "nvme_iov_md": false 00:10:52.863 }, 00:10:52.863 "memory_domains": [ 00:10:52.863 { 00:10:52.863 "dma_device_id": "system", 00:10:52.863 "dma_device_type": 1 00:10:52.863 }, 00:10:52.863 { 00:10:52.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.863 "dma_device_type": 2 00:10:52.863 } 00:10:52.863 ], 00:10:52.863 "driver_specific": { 00:10:52.863 "passthru": { 00:10:52.863 "name": "pt3", 00:10:52.863 "base_bdev_name": "malloc3" 00:10:52.863 } 00:10:52.863 } 00:10:52.863 }' 00:10:52.863 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.863 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.863 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:52.863 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.863 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:53.122 06:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:10:53.381 [2024-08-13 06:05:54.984830] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.381 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=1634d543-653a-401b-90d0-fa1892a1158f 00:10:53.381 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 1634d543-653a-401b-90d0-fa1892a1158f ']' 00:10:53.381 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:53.640 [2024-08-13 06:05:55.180204] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.640 [2024-08-13 06:05:55.180316] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.640 [2024-08-13 06:05:55.180427] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.640 [2024-08-13 06:05:55.180506] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.640 [2024-08-13 06:05:55.180518] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:53.640 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.640 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:10:53.640 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:10:53.640 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:10:53.640 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:10:53.640 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:53.900 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:10:53.900 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:54.159 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.159 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:54.418 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:54.418 06:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.418 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:54.418 06:05:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:54.678 [2024-08-13 06:05:56.350219] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:54.678 [2024-08-13 06:05:56.352118] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:54.678 [2024-08-13 06:05:56.352165] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:54.678 [2024-08-13 06:05:56.352218] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:54.678 [2024-08-13 06:05:56.352284] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:54.678 [2024-08-13 06:05:56.352301] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:54.678 [2024-08-13 06:05:56.352315] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.678 [2024-08-13 06:05:56.352324] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:10:54.678 request: 00:10:54.678 { 00:10:54.678 "name": "raid_bdev1", 00:10:54.678 "raid_level": "concat", 00:10:54.678 "base_bdevs": [ 00:10:54.678 "malloc1", 00:10:54.678 "malloc2", 00:10:54.678 "malloc3" 00:10:54.678 ], 00:10:54.678 "strip_size_kb": 64, 00:10:54.678 "superblock": false, 00:10:54.678 "method": "bdev_raid_create", 00:10:54.678 "req_id": 1 00:10:54.678 } 00:10:54.678 Got JSON-RPC error response 00:10:54.678 response: 00:10:54.678 { 00:10:54.678 "code": -17, 00:10:54.678 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:54.678 } 00:10:54.678 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:10:54.678 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:10:54.678 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:10:54.678 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:10:54.678 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.678 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:10:54.938 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:10:54.938 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:10:54.938 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:55.197 [2024-08-13 06:05:56.761404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:55.197 [2024-08-13 06:05:56.761552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.197 [2024-08-13 06:05:56.761577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:55.197 [2024-08-13 06:05:56.761586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.197 [2024-08-13 
06:05:56.763749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.197 [2024-08-13 06:05:56.763785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:55.197 [2024-08-13 06:05:56.763866] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:55.197 [2024-08-13 06:05:56.763913] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:55.197 pt1 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.197 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.456 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:55.456 "name": "raid_bdev1", 00:10:55.456 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:55.456 "strip_size_kb": 64, 00:10:55.456 "state": "configuring", 00:10:55.456 "raid_level": "concat", 00:10:55.456 "superblock": true, 00:10:55.456 "num_base_bdevs": 3, 00:10:55.456 "num_base_bdevs_discovered": 1, 00:10:55.456 "num_base_bdevs_operational": 3, 00:10:55.456 "base_bdevs_list": [ 00:10:55.456 { 00:10:55.456 "name": "pt1", 00:10:55.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.456 "is_configured": true, 00:10:55.456 "data_offset": 2048, 00:10:55.456 "data_size": 63488 00:10:55.456 }, 00:10:55.456 { 00:10:55.456 "name": null, 00:10:55.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.456 "is_configured": false, 00:10:55.456 "data_offset": 2048, 00:10:55.456 "data_size": 63488 00:10:55.456 }, 00:10:55.456 { 00:10:55.456 "name": null, 00:10:55.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.456 "is_configured": false, 00:10:55.456 "data_offset": 2048, 00:10:55.456 "data_size": 63488 00:10:55.456 } 00:10:55.456 ] 00:10:55.456 }' 00:10:55.456 06:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:55.456 06:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.026 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:10:56.026 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.026 [2024-08-13 06:05:57.759782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.026 [2024-08-13 06:05:57.759940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.026 [2024-08-13 06:05:57.759983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:56.026 [2024-08-13 06:05:57.760010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.026 [2024-08-13 06:05:57.760465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.026 [2024-08-13 06:05:57.760523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.026 [2024-08-13 06:05:57.760637] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:56.026 [2024-08-13 06:05:57.760686] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.026 pt2 00:10:56.026 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:56.284 [2024-08-13 06:05:57.939506] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.284 06:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.543 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:56.543 "name": "raid_bdev1", 00:10:56.543 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:56.543 "strip_size_kb": 64, 00:10:56.543 "state": "configuring", 00:10:56.543 "raid_level": "concat", 00:10:56.543 "superblock": true, 00:10:56.543 "num_base_bdevs": 3, 00:10:56.543 "num_base_bdevs_discovered": 1, 00:10:56.543 "num_base_bdevs_operational": 3, 00:10:56.543 "base_bdevs_list": [ 00:10:56.543 { 00:10:56.543 "name": "pt1", 00:10:56.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.543 "is_configured": true, 00:10:56.543 "data_offset": 2048, 00:10:56.543 "data_size": 63488 00:10:56.543 }, 00:10:56.543 { 00:10:56.543 "name": null, 00:10:56.543 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:56.543 "is_configured": false, 00:10:56.543 "data_offset": 2048, 00:10:56.543 "data_size": 63488 00:10:56.543 }, 00:10:56.543 { 00:10:56.543 "name": null, 00:10:56.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.543 "is_configured": false, 00:10:56.543 "data_offset": 2048, 00:10:56.543 "data_size": 63488 00:10:56.543 } 00:10:56.543 ] 00:10:56.543 }' 00:10:56.543 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:56.543 06:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.110 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:10:57.110 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:10:57.110 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.110 [2024-08-13 06:05:58.849902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.110 [2024-08-13 06:05:58.850070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.110 [2024-08-13 06:05:58.850093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:57.110 [2024-08-13 06:05:58.850103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.110 [2024-08-13 06:05:58.850503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.110 [2024-08-13 06:05:58.850522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.110 [2024-08-13 06:05:58.850594] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:57.110 [2024-08-13 06:05:58.850617] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:57.110 pt2 00:10:57.110 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:10:57.110 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:10:57.110 06:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:57.369 [2024-08-13 06:05:59.049571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:57.369 [2024-08-13 06:05:59.049638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.369 [2024-08-13 06:05:59.049657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.369 [2024-08-13 06:05:59.049670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.369 [2024-08-13 06:05:59.050117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.369 [2024-08-13 06:05:59.050144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:57.369 [2024-08-13 06:05:59.050223] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:57.369 [2024-08-13 06:05:59.050248] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:57.369 [2024-08-13 06:05:59.050352] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:10:57.369 [2024-08-13 06:05:59.050366] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:57.369 [2024-08-13 06:05:59.050643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:57.369 [2024-08-13 06:05:59.050763] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:57.369 [2024-08-13 06:05:59.050772] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:57.369 [2024-08-13 06:05:59.050871] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.369 pt3 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.369 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.628 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.628 "name": "raid_bdev1", 00:10:57.628 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:57.628 "strip_size_kb": 64, 00:10:57.628 "state": "online", 00:10:57.628 "raid_level": "concat", 00:10:57.628 "superblock": true, 00:10:57.628 "num_base_bdevs": 3, 00:10:57.628 "num_base_bdevs_discovered": 3, 00:10:57.628 "num_base_bdevs_operational": 3, 00:10:57.628 "base_bdevs_list": [ 00:10:57.628 { 00:10:57.628 "name": "pt1", 00:10:57.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.628 "is_configured": true, 00:10:57.628 "data_offset": 2048, 00:10:57.628 "data_size": 63488 00:10:57.628 }, 00:10:57.628 { 00:10:57.628 "name": "pt2", 00:10:57.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.628 "is_configured": true, 00:10:57.628 "data_offset": 2048, 00:10:57.628 "data_size": 63488 00:10:57.628 }, 00:10:57.628 { 00:10:57.628 "name": "pt3", 00:10:57.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.628 "is_configured": true, 00:10:57.628 "data_offset": 2048, 00:10:57.628 "data_size": 63488 00:10:57.628 } 00:10:57.628 ] 00:10:57.628 }' 00:10:57.628 06:05:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.628 06:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:58.195 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:58.195 [2024-08-13 06:05:59.980340] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.455 06:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:58.455 "name": "raid_bdev1", 00:10:58.455 "aliases": [ 00:10:58.455 "1634d543-653a-401b-90d0-fa1892a1158f" 00:10:58.455 ], 00:10:58.455 "product_name": "Raid Volume", 00:10:58.455 "block_size": 512, 00:10:58.455 "num_blocks": 190464, 00:10:58.455 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:58.455 "assigned_rate_limits": { 00:10:58.455 "rw_ios_per_sec": 0, 00:10:58.455 "rw_mbytes_per_sec": 0, 00:10:58.455 "r_mbytes_per_sec": 0, 00:10:58.455 "w_mbytes_per_sec": 0 00:10:58.455 }, 00:10:58.455 "claimed": false, 00:10:58.455 "zoned": false, 00:10:58.455 "supported_io_types": { 00:10:58.455 "read": true, 00:10:58.455 "write": true, 00:10:58.455 "unmap": true, 00:10:58.455 "flush": true, 00:10:58.455 "reset": true, 00:10:58.455 "nvme_admin": false, 00:10:58.455 "nvme_io": false, 00:10:58.455 "nvme_io_md": false, 00:10:58.456 "write_zeroes": true, 00:10:58.456 "zcopy": false, 00:10:58.456 "get_zone_info": false, 00:10:58.456 "zone_management": false, 00:10:58.456 "zone_append": false, 00:10:58.456 "compare": false, 00:10:58.456 "compare_and_write": false, 00:10:58.456 "abort": false, 00:10:58.456 "seek_hole": false, 00:10:58.456 "seek_data": false, 00:10:58.456 "copy": false, 00:10:58.456 "nvme_iov_md": false 00:10:58.456 }, 00:10:58.456 "memory_domains": [ 00:10:58.456 { 00:10:58.456 "dma_device_id": "system", 00:10:58.456 "dma_device_type": 1 00:10:58.456 }, 00:10:58.456 { 00:10:58.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.456 "dma_device_type": 2 00:10:58.456 }, 00:10:58.456 { 00:10:58.456 "dma_device_id": "system", 00:10:58.456 "dma_device_type": 1 00:10:58.456 }, 00:10:58.456 { 00:10:58.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.456 "dma_device_type": 2 00:10:58.456 }, 00:10:58.456 { 00:10:58.456 "dma_device_id": "system", 00:10:58.456 "dma_device_type": 1 00:10:58.456 }, 00:10:58.456 { 00:10:58.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.456 "dma_device_type": 2 00:10:58.456 } 00:10:58.456 ], 00:10:58.456 "driver_specific": { 00:10:58.456 "raid": { 00:10:58.456 "uuid": "1634d543-653a-401b-90d0-fa1892a1158f", 00:10:58.456 "strip_size_kb": 64, 00:10:58.456 "state": "online", 00:10:58.456 "raid_level": "concat", 00:10:58.456 "superblock": true, 00:10:58.456 "num_base_bdevs": 3, 
00:10:58.456 "num_base_bdevs_discovered": 3, 00:10:58.456 "num_base_bdevs_operational": 3, 00:10:58.456 "base_bdevs_list": [ 00:10:58.456 { 00:10:58.456 "name": "pt1", 00:10:58.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.456 "is_configured": true, 00:10:58.456 "data_offset": 2048, 00:10:58.456 "data_size": 63488 00:10:58.456 }, 00:10:58.456 { 00:10:58.456 "name": "pt2", 00:10:58.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.456 "is_configured": true, 00:10:58.456 "data_offset": 2048, 00:10:58.456 "data_size": 63488 00:10:58.456 }, 00:10:58.456 { 00:10:58.456 "name": "pt3", 00:10:58.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.456 "is_configured": true, 00:10:58.456 "data_offset": 2048, 00:10:58.456 "data_size": 63488 00:10:58.456 } 00:10:58.456 ] 00:10:58.456 } 00:10:58.456 } 00:10:58.456 }' 00:10:58.456 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.456 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:58.456 pt2 00:10:58.456 pt3' 00:10:58.456 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:58.456 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:58.456 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:58.715 "name": "pt1", 00:10:58.715 "aliases": [ 00:10:58.715 "00000000-0000-0000-0000-000000000001" 00:10:58.715 ], 00:10:58.715 "product_name": "passthru", 00:10:58.715 "block_size": 512, 00:10:58.715 "num_blocks": 65536, 00:10:58.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.715 "assigned_rate_limits": { 00:10:58.715 "rw_ios_per_sec": 0, 00:10:58.715 "rw_mbytes_per_sec": 0, 00:10:58.715 "r_mbytes_per_sec": 0, 00:10:58.715 "w_mbytes_per_sec": 0 00:10:58.715 }, 00:10:58.715 "claimed": true, 00:10:58.715 "claim_type": "exclusive_write", 00:10:58.715 "zoned": false, 00:10:58.715 "supported_io_types": { 00:10:58.715 "read": true, 00:10:58.715 "write": true, 00:10:58.715 "unmap": true, 00:10:58.715 "flush": true, 00:10:58.715 "reset": true, 00:10:58.715 "nvme_admin": false, 00:10:58.715 "nvme_io": false, 00:10:58.715 "nvme_io_md": false, 00:10:58.715 "write_zeroes": true, 00:10:58.715 "zcopy": true, 00:10:58.715 "get_zone_info": false, 00:10:58.715 "zone_management": false, 00:10:58.715 "zone_append": false, 00:10:58.715 "compare": false, 00:10:58.715 "compare_and_write": false, 00:10:58.715 "abort": true, 00:10:58.715 "seek_hole": false, 00:10:58.715 "seek_data": false, 00:10:58.715 "copy": true, 00:10:58.715 "nvme_iov_md": false 00:10:58.715 }, 00:10:58.715 "memory_domains": [ 00:10:58.715 { 00:10:58.715 "dma_device_id": "system", 00:10:58.715 "dma_device_type": 1 00:10:58.715 }, 00:10:58.715 { 00:10:58.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.715 "dma_device_type": 2 00:10:58.715 } 00:10:58.715 ], 00:10:58.715 "driver_specific": { 00:10:58.715 "passthru": { 00:10:58.715 "name": "pt1", 00:10:58.715 "base_bdev_name": "malloc1" 00:10:58.715 } 00:10:58.715 } 00:10:58.715 }' 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # 
jq .block_size 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:58.715 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:58.716 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:58.975 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:58.975 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:58.975 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:58.975 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:58.975 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:59.235 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:59.235 "name": "pt2", 00:10:59.235 "aliases": [ 00:10:59.235 "00000000-0000-0000-0000-000000000002" 00:10:59.235 ], 00:10:59.235 "product_name": "passthru", 00:10:59.235 "block_size": 512, 00:10:59.235 "num_blocks": 65536, 00:10:59.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.235 "assigned_rate_limits": { 00:10:59.235 "rw_ios_per_sec": 0, 00:10:59.235 "rw_mbytes_per_sec": 0, 00:10:59.235 "r_mbytes_per_sec": 0, 00:10:59.235 "w_mbytes_per_sec": 0 00:10:59.235 }, 00:10:59.235 "claimed": true, 00:10:59.235 "claim_type": "exclusive_write", 00:10:59.235 "zoned": false, 00:10:59.235 "supported_io_types": { 00:10:59.235 "read": true, 00:10:59.235 "write": true, 00:10:59.235 "unmap": true, 00:10:59.235 "flush": true, 00:10:59.235 "reset": true, 00:10:59.235 "nvme_admin": false, 00:10:59.235 "nvme_io": false, 00:10:59.235 "nvme_io_md": false, 00:10:59.235 "write_zeroes": true, 00:10:59.235 "zcopy": true, 00:10:59.235 "get_zone_info": false, 00:10:59.235 "zone_management": false, 00:10:59.235 "zone_append": false, 00:10:59.235 "compare": false, 00:10:59.235 "compare_and_write": false, 00:10:59.235 "abort": true, 00:10:59.235 "seek_hole": false, 00:10:59.235 "seek_data": false, 00:10:59.235 "copy": true, 00:10:59.235 "nvme_iov_md": false 00:10:59.235 }, 00:10:59.235 "memory_domains": [ 00:10:59.235 { 00:10:59.235 "dma_device_id": "system", 00:10:59.235 "dma_device_type": 1 00:10:59.235 }, 00:10:59.235 { 00:10:59.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.235 "dma_device_type": 2 00:10:59.235 } 00:10:59.235 ], 00:10:59.235 "driver_specific": { 00:10:59.235 "passthru": { 00:10:59.235 "name": "pt2", 00:10:59.235 "base_bdev_name": "malloc2" 00:10:59.235 } 00:10:59.235 } 00:10:59.235 }' 00:10:59.235 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.235 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.235 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:59.235 06:06:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:59.235 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:59.235 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:59.235 06:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:59.235 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:59.495 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:59.495 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:59.495 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:59.495 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:59.495 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:59.495 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:59.495 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:59.755 "name": "pt3", 00:10:59.755 "aliases": [ 00:10:59.755 "00000000-0000-0000-0000-000000000003" 00:10:59.755 ], 00:10:59.755 "product_name": "passthru", 00:10:59.755 "block_size": 512, 00:10:59.755 "num_blocks": 65536, 00:10:59.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.755 "assigned_rate_limits": { 00:10:59.755 "rw_ios_per_sec": 0, 00:10:59.755 "rw_mbytes_per_sec": 0, 00:10:59.755 "r_mbytes_per_sec": 0, 00:10:59.755 "w_mbytes_per_sec": 0 00:10:59.755 }, 00:10:59.755 "claimed": true, 00:10:59.755 "claim_type": "exclusive_write", 00:10:59.755 "zoned": false, 00:10:59.755 "supported_io_types": { 00:10:59.755 "read": true, 00:10:59.755 "write": true, 00:10:59.755 "unmap": true, 00:10:59.755 "flush": true, 00:10:59.755 "reset": true, 00:10:59.755 "nvme_admin": false, 00:10:59.755 "nvme_io": false, 00:10:59.755 "nvme_io_md": false, 00:10:59.755 "write_zeroes": true, 00:10:59.755 "zcopy": true, 00:10:59.755 "get_zone_info": false, 00:10:59.755 "zone_management": false, 00:10:59.755 "zone_append": false, 00:10:59.755 "compare": false, 00:10:59.755 "compare_and_write": false, 00:10:59.755 "abort": true, 00:10:59.755 "seek_hole": false, 00:10:59.755 "seek_data": false, 00:10:59.755 "copy": true, 00:10:59.755 "nvme_iov_md": false 00:10:59.755 }, 00:10:59.755 "memory_domains": [ 00:10:59.755 { 00:10:59.755 "dma_device_id": "system", 00:10:59.755 "dma_device_type": 1 00:10:59.755 }, 00:10:59.755 { 00:10:59.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.755 "dma_device_type": 2 00:10:59.755 } 00:10:59.755 ], 00:10:59.755 "driver_specific": { 00:10:59.755 "passthru": { 00:10:59.755 "name": "pt3", 00:10:59.755 "base_bdev_name": "malloc3" 00:10:59.755 } 00:10:59.755 } 00:10:59.755 }' 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:59.755 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:00.014 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:00.014 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:00.014 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:00.014 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:00.014 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:00.014 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:11:00.273 [2024-08-13 06:06:01.825167] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 1634d543-653a-401b-90d0-fa1892a1158f '!=' 1634d543-653a-401b-90d0-fa1892a1158f ']' 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 79548 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 79548 ']' 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 79548 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79548 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79548' 00:11:00.273 killing process with pid 79548 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 79548 00:11:00.273 [2024-08-13 06:06:01.885247] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.273 [2024-08-13 06:06:01.885395] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.273 06:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 79548 00:11:00.273 [2024-08-13 06:06:01.885483] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.273 [2024-08-13 06:06:01.885499] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:11:00.273 [2024-08-13 06:06:01.918114] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.534 06:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 
0 00:11:00.534 00:11:00.534 real 0m12.225s 00:11:00.534 user 0m22.247s 00:11:00.534 sys 0m1.857s 00:11:00.534 06:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:00.534 06:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 ************************************ 00:11:00.534 END TEST raid_superblock_test 00:11:00.534 ************************************ 00:11:00.534 06:06:02 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:00.534 06:06:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:00.534 06:06:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:00.534 06:06:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 ************************************ 00:11:00.534 START TEST raid_read_error_test 00:11:00.534 ************************************ 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 read 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # 
create_arg+=' -z 64' 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.kgvRdCU5i9 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=79983 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 79983 /var/tmp/spdk-raid.sock 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 79983 ']' 00:11:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:00.534 06:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.795 [2024-08-13 06:06:02.331390] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:11:00.795 [2024-08-13 06:06:02.331582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79983 ] 00:11:00.795 [2024-08-13 06:06:02.478424] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.795 [2024-08-13 06:06:02.523987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.795 [2024-08-13 06:06:02.566377] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.795 [2024-08-13 06:06:02.566414] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.364 06:06:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:01.364 06:06:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:11:01.364 06:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:01.364 06:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.625 BaseBdev1_malloc 00:11:01.625 06:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:01.884 true 00:11:01.884 06:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:01.884 [2024-08-13 06:06:03.646191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:01.884 [2024-08-13 
06:06:03.646272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.884 [2024-08-13 06:06:03.646297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:01.884 [2024-08-13 06:06:03.646309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.884 [2024-08-13 06:06:03.648562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.884 [2024-08-13 06:06:03.648608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.884 BaseBdev1 00:11:01.884 06:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:01.884 06:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:02.144 BaseBdev2_malloc 00:11:02.144 06:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:02.403 true 00:11:02.403 06:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:02.663 [2024-08-13 06:06:04.238090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:02.663 [2024-08-13 06:06:04.238162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.663 [2024-08-13 06:06:04.238185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:02.663 [2024-08-13 06:06:04.238195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.663 [2024-08-13 06:06:04.240351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.663 [2024-08-13 06:06:04.240459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:02.663 BaseBdev2 00:11:02.663 06:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:02.663 06:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:02.663 BaseBdev3_malloc 00:11:02.923 06:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:02.923 true 00:11:02.923 06:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:03.191 [2024-08-13 06:06:04.792886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:03.191 [2024-08-13 06:06:04.792956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.191 [2024-08-13 06:06:04.792981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:03.191 [2024-08-13 06:06:04.792991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.191 [2024-08-13 06:06:04.795130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.191 [2024-08-13 06:06:04.795168] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:03.191 BaseBdev3 00:11:03.191 06:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:03.459 [2024-08-13 06:06:04.988703] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.460 [2024-08-13 06:06:04.990640] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.460 [2024-08-13 06:06:04.990758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.460 [2024-08-13 06:06:04.990985] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:03.460 [2024-08-13 06:06:04.991048] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:03.460 [2024-08-13 06:06:04.991375] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:03.460 [2024-08-13 06:06:04.991561] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:03.460 [2024-08-13 06:06:04.991607] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:03.460 [2024-08-13 06:06:04.991814] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.460 "name": "raid_bdev1", 00:11:03.460 "uuid": "a4680e7e-6c34-4fe0-97e4-416914e573d7", 00:11:03.460 "strip_size_kb": 64, 00:11:03.460 "state": "online", 00:11:03.460 "raid_level": "concat", 00:11:03.460 "superblock": true, 00:11:03.460 "num_base_bdevs": 3, 00:11:03.460 "num_base_bdevs_discovered": 3, 00:11:03.460 "num_base_bdevs_operational": 3, 00:11:03.460 "base_bdevs_list": [ 00:11:03.460 { 00:11:03.460 "name": "BaseBdev1", 00:11:03.460 "uuid": "29662d8e-ee8f-5494-bad5-9d96bb9f936f", 00:11:03.460 "is_configured": true, 
00:11:03.460 "data_offset": 2048, 00:11:03.460 "data_size": 63488 00:11:03.460 }, 00:11:03.460 { 00:11:03.460 "name": "BaseBdev2", 00:11:03.460 "uuid": "c2321ad8-145f-5c8c-8895-f9b4a70f63e9", 00:11:03.460 "is_configured": true, 00:11:03.460 "data_offset": 2048, 00:11:03.460 "data_size": 63488 00:11:03.460 }, 00:11:03.460 { 00:11:03.460 "name": "BaseBdev3", 00:11:03.460 "uuid": "0af551cb-6d0d-502e-a97f-0690d1605f7a", 00:11:03.460 "is_configured": true, 00:11:03.460 "data_offset": 2048, 00:11:03.460 "data_size": 63488 00:11:03.460 } 00:11:03.460 ] 00:11:03.460 }' 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.460 06:06:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.029 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:04.029 06:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:11:04.288 [2024-08-13 06:06:05.883516] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:05.228 06:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.228 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.488 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.488 "name": "raid_bdev1", 00:11:05.488 "uuid": "a4680e7e-6c34-4fe0-97e4-416914e573d7", 00:11:05.488 "strip_size_kb": 64, 00:11:05.488 "state": "online", 00:11:05.488 "raid_level": "concat", 00:11:05.488 "superblock": true, 00:11:05.488 "num_base_bdevs": 3, 00:11:05.488 "num_base_bdevs_discovered": 3, 00:11:05.488 
"num_base_bdevs_operational": 3, 00:11:05.488 "base_bdevs_list": [ 00:11:05.488 { 00:11:05.488 "name": "BaseBdev1", 00:11:05.488 "uuid": "29662d8e-ee8f-5494-bad5-9d96bb9f936f", 00:11:05.488 "is_configured": true, 00:11:05.488 "data_offset": 2048, 00:11:05.488 "data_size": 63488 00:11:05.488 }, 00:11:05.488 { 00:11:05.488 "name": "BaseBdev2", 00:11:05.488 "uuid": "c2321ad8-145f-5c8c-8895-f9b4a70f63e9", 00:11:05.488 "is_configured": true, 00:11:05.488 "data_offset": 2048, 00:11:05.488 "data_size": 63488 00:11:05.488 }, 00:11:05.488 { 00:11:05.488 "name": "BaseBdev3", 00:11:05.488 "uuid": "0af551cb-6d0d-502e-a97f-0690d1605f7a", 00:11:05.488 "is_configured": true, 00:11:05.488 "data_offset": 2048, 00:11:05.488 "data_size": 63488 00:11:05.488 } 00:11:05.488 ] 00:11:05.488 }' 00:11:05.488 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.488 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.058 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:06.318 [2024-08-13 06:06:07.897657] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.318 [2024-08-13 06:06:07.897760] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.318 [2024-08-13 06:06:07.900080] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.318 [2024-08-13 06:06:07.900177] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.318 [2024-08-13 06:06:07.900231] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.318 [2024-08-13 06:06:07.900268] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:11:06.318 0 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 79983 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 79983 ']' 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 79983 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79983 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79983' 00:11:06.318 killing process with pid 79983 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 79983 00:11:06.318 [2024-08-13 06:06:07.944655] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.318 06:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 79983 00:11:06.318 [2024-08-13 06:06:07.969637] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:11:06.578 06:06:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.kgvRdCU5i9 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.50 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.50 != \0\.\0\0 ]] 00:11:06.578 00:11:06.578 real 0m5.973s 00:11:06.578 user 0m9.323s 00:11:06.578 sys 0m0.846s 00:11:06.578 ************************************ 00:11:06.578 END TEST raid_read_error_test 00:11:06.578 ************************************ 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:06.578 06:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.578 06:06:08 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:06.578 06:06:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:06.578 06:06:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.578 06:06:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.578 ************************************ 00:11:06.578 START TEST raid_write_error_test 00:11:06.578 ************************************ 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 write 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local 
raid_bdev_name=raid_bdev1 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.rnTQ1ASypR 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=80161 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 80161 /var/tmp/spdk-raid.sock 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 80161 ']' 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:06.578 06:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:06.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:06.579 06:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:06.579 06:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.839 [2024-08-13 06:06:08.372736] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:11:06.839 [2024-08-13 06:06:08.372960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80161 ] 00:11:06.839 [2024-08-13 06:06:08.520260] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.839 [2024-08-13 06:06:08.565615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.839 [2024-08-13 06:06:08.607909] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.839 [2024-08-13 06:06:08.607945] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.409 06:06:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:07.409 06:06:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:11:07.409 06:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:07.409 06:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:07.668 BaseBdev1_malloc 00:11:07.668 06:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:07.928 true 00:11:07.928 06:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.188 [2024-08-13 06:06:09.747423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.188 [2024-08-13 06:06:09.747515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.188 [2024-08-13 06:06:09.747538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:08.188 [2024-08-13 06:06:09.747550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.188 [2024-08-13 06:06:09.749875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.188 [2024-08-13 06:06:09.749929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.188 BaseBdev1 00:11:08.188 06:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:08.188 06:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.188 BaseBdev2_malloc 00:11:08.188 06:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:08.447 true 00:11:08.447 06:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.706 [2024-08-13 06:06:10.351147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.706 [2024-08-13 06:06:10.351235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.706 [2024-08-13 06:06:10.351258] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:08.706 [2024-08-13 06:06:10.351269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.706 [2024-08-13 06:06:10.353392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.706 [2024-08-13 06:06:10.353434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.706 BaseBdev2 00:11:08.706 06:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:08.706 06:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.965 BaseBdev3_malloc 00:11:08.965 06:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:09.225 true 00:11:09.225 06:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:09.225 [2024-08-13 06:06:10.954975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:09.225 [2024-08-13 06:06:10.955068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.225 [2024-08-13 06:06:10.955107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:09.225 [2024-08-13 06:06:10.955118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.225 [2024-08-13 06:06:10.957201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.225 [2024-08-13 06:06:10.957242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:09.225 BaseBdev3 00:11:09.225 06:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:09.484 [2024-08-13 06:06:11.150732] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.484 [2024-08-13 06:06:11.152619] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.484 [2024-08-13 06:06:11.152738] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.484 [2024-08-13 06:06:11.152965] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:09.484 [2024-08-13 06:06:11.153012] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:09.484 [2024-08-13 06:06:11.153343] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:09.484 [2024-08-13 06:06:11.153524] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:09.484 [2024-08-13 06:06:11.153572] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:09.484 [2024-08-13 06:06:11.153762] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:09.484 
06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.484 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.744 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.744 "name": "raid_bdev1", 00:11:09.744 "uuid": "3fdaad56-fef1-4d52-8713-f2ed428233cb", 00:11:09.744 "strip_size_kb": 64, 00:11:09.744 "state": "online", 00:11:09.744 "raid_level": "concat", 00:11:09.744 "superblock": true, 00:11:09.744 "num_base_bdevs": 3, 00:11:09.744 "num_base_bdevs_discovered": 3, 00:11:09.744 "num_base_bdevs_operational": 3, 00:11:09.744 "base_bdevs_list": [ 00:11:09.744 { 00:11:09.744 "name": "BaseBdev1", 00:11:09.744 "uuid": "f720d368-4be9-5ab8-be67-3cbc5778955c", 00:11:09.744 "is_configured": true, 00:11:09.744 "data_offset": 2048, 00:11:09.744 "data_size": 63488 00:11:09.744 }, 00:11:09.744 { 00:11:09.744 "name": "BaseBdev2", 00:11:09.744 "uuid": "1c29c1d3-7478-5547-983a-e532b33f9272", 00:11:09.744 "is_configured": true, 00:11:09.744 "data_offset": 2048, 00:11:09.744 "data_size": 63488 00:11:09.744 }, 00:11:09.744 { 00:11:09.744 "name": "BaseBdev3", 00:11:09.744 "uuid": "5bb33b18-f968-51e0-991c-beab2e1938a8", 00:11:09.744 "is_configured": true, 00:11:09.744 "data_offset": 2048, 00:11:09.744 "data_size": 63488 00:11:09.744 } 00:11:09.744 ] 00:11:09.744 }' 00:11:09.744 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.744 06:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.313 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:10.313 06:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:11:10.313 [2024-08-13 06:06:11.993598] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:11.253 06:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:11:11.512 06:06:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.512 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.513 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.513 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.772 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.772 "name": "raid_bdev1", 00:11:11.772 "uuid": "3fdaad56-fef1-4d52-8713-f2ed428233cb", 00:11:11.772 "strip_size_kb": 64, 00:11:11.772 "state": "online", 00:11:11.772 "raid_level": "concat", 00:11:11.772 "superblock": true, 00:11:11.772 "num_base_bdevs": 3, 00:11:11.772 "num_base_bdevs_discovered": 3, 00:11:11.772 "num_base_bdevs_operational": 3, 00:11:11.772 "base_bdevs_list": [ 00:11:11.772 { 00:11:11.772 "name": "BaseBdev1", 00:11:11.772 "uuid": "f720d368-4be9-5ab8-be67-3cbc5778955c", 00:11:11.772 "is_configured": true, 00:11:11.772 "data_offset": 2048, 00:11:11.772 "data_size": 63488 00:11:11.772 }, 00:11:11.772 { 00:11:11.772 "name": "BaseBdev2", 00:11:11.772 "uuid": "1c29c1d3-7478-5547-983a-e532b33f9272", 00:11:11.772 "is_configured": true, 00:11:11.772 "data_offset": 2048, 00:11:11.772 "data_size": 63488 00:11:11.772 }, 00:11:11.772 { 00:11:11.772 "name": "BaseBdev3", 00:11:11.772 "uuid": "5bb33b18-f968-51e0-991c-beab2e1938a8", 00:11:11.772 "is_configured": true, 00:11:11.772 "data_offset": 2048, 00:11:11.772 "data_size": 63488 00:11:11.772 } 00:11:11.772 ] 00:11:11.772 }' 00:11:11.772 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.772 06:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.347 06:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:12.347 [2024-08-13 06:06:14.116306] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.347 [2024-08-13 06:06:14.116423] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.347 [2024-08-13 06:06:14.118741] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.347 [2024-08-13 06:06:14.118821] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:12.347 [2024-08-13 06:06:14.118871] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.347 [2024-08-13 06:06:14.118906] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:11:12.347 0 00:11:12.347 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 80161 00:11:12.347 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 80161 ']' 00:11:12.347 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 80161 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80161 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80161' 00:11:12.606 killing process with pid 80161 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 80161 00:11:12.606 [2024-08-13 06:06:14.175072] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.606 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 80161 00:11:12.606 [2024-08-13 06:06:14.200467] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.rnTQ1ASypR 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:11:12.866 00:11:12.866 real 0m6.167s 00:11:12.866 user 0m9.617s 00:11:12.866 sys 0m0.914s 00:11:12.866 ************************************ 00:11:12.866 END TEST raid_write_error_test 00:11:12.866 ************************************ 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.866 06:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.866 06:06:14 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:11:12.866 06:06:14 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:12.866 06:06:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:12.866 06:06:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.866 06:06:14 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:11:12.866 ************************************ 00:11:12.866 START TEST raid_state_function_test 00:11:12.866 ************************************ 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=80333 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 80333' 00:11:12.866 Process raid pid: 80333 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 80333 
/var/tmp/spdk-raid.sock 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 80333 ']' 00:11:12.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:12.866 06:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.866 [2024-08-13 06:06:14.603997] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:11:12.866 [2024-08-13 06:06:14.604137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.126 [2024-08-13 06:06:14.749545] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.126 [2024-08-13 06:06:14.795005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.126 [2024-08-13 06:06:14.837195] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.126 [2024-08-13 06:06:14.837244] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.695 06:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:13.695 06:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:11:13.695 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:13.955 [2024-08-13 06:06:15.588868] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.955 [2024-08-13 06:06:15.588984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.955 [2024-08-13 06:06:15.589001] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.955 [2024-08-13 06:06:15.589009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.955 [2024-08-13 06:06:15.589020] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.955 [2024-08-13 06:06:15.589041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.955 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.215 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:14.215 "name": "Existed_Raid", 00:11:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.215 "strip_size_kb": 0, 00:11:14.215 "state": "configuring", 00:11:14.215 "raid_level": "raid1", 00:11:14.215 "superblock": false, 00:11:14.215 "num_base_bdevs": 3, 00:11:14.215 "num_base_bdevs_discovered": 0, 00:11:14.215 "num_base_bdevs_operational": 3, 00:11:14.215 "base_bdevs_list": [ 00:11:14.215 { 00:11:14.215 "name": "BaseBdev1", 00:11:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.215 "is_configured": false, 00:11:14.215 "data_offset": 0, 00:11:14.215 "data_size": 0 00:11:14.215 }, 00:11:14.215 { 00:11:14.215 "name": "BaseBdev2", 00:11:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.215 "is_configured": false, 00:11:14.215 "data_offset": 0, 00:11:14.215 "data_size": 0 00:11:14.215 }, 00:11:14.215 { 00:11:14.215 "name": "BaseBdev3", 00:11:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.215 "is_configured": false, 00:11:14.215 "data_offset": 0, 00:11:14.215 "data_size": 0 00:11:14.215 } 00:11:14.215 ] 00:11:14.215 }' 00:11:14.215 06:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:14.215 06:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.784 06:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:14.784 [2024-08-13 06:06:16.531182] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.784 [2024-08-13 06:06:16.531292] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:14.784 06:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:15.043 [2024-08-13 06:06:16.730834] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.043 [2024-08-13 06:06:16.730969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.043 [2024-08-13 06:06:16.730999] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.043 [2024-08-13 06:06:16.731018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.043 
[2024-08-13 06:06:16.731050] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.043 [2024-08-13 06:06:16.731086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.043 06:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.302 [2024-08-13 06:06:16.935394] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.302 BaseBdev1 00:11:15.302 06:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:15.302 06:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:15.302 06:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:15.302 06:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:15.302 06:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:15.302 06:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:15.302 06:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:15.562 06:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:15.562 [ 00:11:15.562 { 00:11:15.562 "name": "BaseBdev1", 00:11:15.562 "aliases": [ 00:11:15.562 "85cb595a-f0b3-4b1d-8336-347c81de1a7d" 00:11:15.562 ], 00:11:15.562 "product_name": "Malloc disk", 00:11:15.562 "block_size": 512, 00:11:15.562 "num_blocks": 65536, 00:11:15.562 "uuid": "85cb595a-f0b3-4b1d-8336-347c81de1a7d", 00:11:15.562 "assigned_rate_limits": { 00:11:15.562 "rw_ios_per_sec": 0, 00:11:15.562 "rw_mbytes_per_sec": 0, 00:11:15.562 "r_mbytes_per_sec": 0, 00:11:15.562 "w_mbytes_per_sec": 0 00:11:15.562 }, 00:11:15.562 "claimed": true, 00:11:15.562 "claim_type": "exclusive_write", 00:11:15.562 "zoned": false, 00:11:15.562 "supported_io_types": { 00:11:15.562 "read": true, 00:11:15.562 "write": true, 00:11:15.562 "unmap": true, 00:11:15.562 "flush": true, 00:11:15.562 "reset": true, 00:11:15.562 "nvme_admin": false, 00:11:15.562 "nvme_io": false, 00:11:15.562 "nvme_io_md": false, 00:11:15.562 "write_zeroes": true, 00:11:15.562 "zcopy": true, 00:11:15.562 "get_zone_info": false, 00:11:15.562 "zone_management": false, 00:11:15.562 "zone_append": false, 00:11:15.562 "compare": false, 00:11:15.562 "compare_and_write": false, 00:11:15.562 "abort": true, 00:11:15.562 "seek_hole": false, 00:11:15.562 "seek_data": false, 00:11:15.562 "copy": true, 00:11:15.562 "nvme_iov_md": false 00:11:15.562 }, 00:11:15.562 "memory_domains": [ 00:11:15.562 { 00:11:15.562 "dma_device_id": "system", 00:11:15.562 "dma_device_type": 1 00:11:15.562 }, 00:11:15.562 { 00:11:15.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.562 "dma_device_type": 2 00:11:15.562 } 00:11:15.562 ], 00:11:15.562 "driver_specific": {} 00:11:15.562 } 00:11:15.562 ] 00:11:15.562 06:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:15.562 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:11:15.562 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:15.562 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:15.562 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:15.562 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:15.563 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:15.563 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:15.563 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:15.563 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:15.563 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:15.563 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.563 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.822 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:15.822 "name": "Existed_Raid", 00:11:15.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.822 "strip_size_kb": 0, 00:11:15.822 "state": "configuring", 00:11:15.822 "raid_level": "raid1", 00:11:15.822 "superblock": false, 00:11:15.822 "num_base_bdevs": 3, 00:11:15.822 "num_base_bdevs_discovered": 1, 00:11:15.822 "num_base_bdevs_operational": 3, 00:11:15.822 "base_bdevs_list": [ 00:11:15.822 { 00:11:15.822 "name": "BaseBdev1", 00:11:15.822 "uuid": "85cb595a-f0b3-4b1d-8336-347c81de1a7d", 00:11:15.822 "is_configured": true, 00:11:15.822 "data_offset": 0, 00:11:15.822 "data_size": 65536 00:11:15.822 }, 00:11:15.822 { 00:11:15.822 "name": "BaseBdev2", 00:11:15.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.822 "is_configured": false, 00:11:15.822 "data_offset": 0, 00:11:15.822 "data_size": 0 00:11:15.822 }, 00:11:15.822 { 00:11:15.822 "name": "BaseBdev3", 00:11:15.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.822 "is_configured": false, 00:11:15.822 "data_offset": 0, 00:11:15.822 "data_size": 0 00:11:15.822 } 00:11:15.822 ] 00:11:15.822 }' 00:11:15.822 06:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:15.822 06:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.390 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:16.649 [2024-08-13 06:06:18.253176] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.649 [2024-08-13 06:06:18.253301] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:16.649 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:16.909 [2024-08-13 06:06:18.448905] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:11:16.909 [2024-08-13 06:06:18.450805] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.909 [2024-08-13 06:06:18.450900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.909 [2024-08-13 06:06:18.450953] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.909 [2024-08-13 06:06:18.450991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:16.909 "name": "Existed_Raid", 00:11:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.909 "strip_size_kb": 0, 00:11:16.909 "state": "configuring", 00:11:16.909 "raid_level": "raid1", 00:11:16.909 "superblock": false, 00:11:16.909 "num_base_bdevs": 3, 00:11:16.909 "num_base_bdevs_discovered": 1, 00:11:16.909 "num_base_bdevs_operational": 3, 00:11:16.909 "base_bdevs_list": [ 00:11:16.909 { 00:11:16.909 "name": "BaseBdev1", 00:11:16.909 "uuid": "85cb595a-f0b3-4b1d-8336-347c81de1a7d", 00:11:16.909 "is_configured": true, 00:11:16.909 "data_offset": 0, 00:11:16.909 "data_size": 65536 00:11:16.909 }, 00:11:16.909 { 00:11:16.909 "name": "BaseBdev2", 00:11:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.909 "is_configured": false, 00:11:16.909 "data_offset": 0, 00:11:16.909 "data_size": 0 00:11:16.909 }, 00:11:16.909 { 00:11:16.909 "name": "BaseBdev3", 00:11:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.909 "is_configured": false, 00:11:16.909 "data_offset": 0, 00:11:16.909 "data_size": 0 00:11:16.909 } 00:11:16.909 ] 00:11:16.909 }' 00:11:16.909 06:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:16.909 06:06:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.535 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.794 [2024-08-13 06:06:19.304087] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.794 BaseBdev2 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.794 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.056 [ 00:11:18.056 { 00:11:18.056 "name": "BaseBdev2", 00:11:18.056 "aliases": [ 00:11:18.056 "7b25fa7d-98eb-4890-ae86-843a5824b96a" 00:11:18.056 ], 00:11:18.056 "product_name": "Malloc disk", 00:11:18.056 "block_size": 512, 00:11:18.056 "num_blocks": 65536, 00:11:18.056 "uuid": "7b25fa7d-98eb-4890-ae86-843a5824b96a", 00:11:18.056 "assigned_rate_limits": { 00:11:18.056 "rw_ios_per_sec": 0, 00:11:18.056 "rw_mbytes_per_sec": 0, 00:11:18.056 "r_mbytes_per_sec": 0, 00:11:18.056 "w_mbytes_per_sec": 0 00:11:18.056 }, 00:11:18.056 "claimed": true, 00:11:18.056 "claim_type": "exclusive_write", 00:11:18.056 "zoned": false, 00:11:18.056 "supported_io_types": { 00:11:18.056 "read": true, 00:11:18.056 "write": true, 00:11:18.056 "unmap": true, 00:11:18.056 "flush": true, 00:11:18.056 "reset": true, 00:11:18.056 "nvme_admin": false, 00:11:18.056 "nvme_io": false, 00:11:18.056 "nvme_io_md": false, 00:11:18.056 "write_zeroes": true, 00:11:18.056 "zcopy": true, 00:11:18.056 "get_zone_info": false, 00:11:18.056 "zone_management": false, 00:11:18.056 "zone_append": false, 00:11:18.056 "compare": false, 00:11:18.056 "compare_and_write": false, 00:11:18.056 "abort": true, 00:11:18.056 "seek_hole": false, 00:11:18.056 "seek_data": false, 00:11:18.056 "copy": true, 00:11:18.056 "nvme_iov_md": false 00:11:18.056 }, 00:11:18.056 "memory_domains": [ 00:11:18.056 { 00:11:18.056 "dma_device_id": "system", 00:11:18.056 "dma_device_type": 1 00:11:18.056 }, 00:11:18.056 { 00:11:18.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.056 "dma_device_type": 2 00:11:18.056 } 00:11:18.056 ], 00:11:18.056 "driver_specific": {} 00:11:18.056 } 00:11:18.056 ] 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.056 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.338 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.338 "name": "Existed_Raid", 00:11:18.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.338 "strip_size_kb": 0, 00:11:18.338 "state": "configuring", 00:11:18.338 "raid_level": "raid1", 00:11:18.338 "superblock": false, 00:11:18.338 "num_base_bdevs": 3, 00:11:18.338 "num_base_bdevs_discovered": 2, 00:11:18.338 "num_base_bdevs_operational": 3, 00:11:18.338 "base_bdevs_list": [ 00:11:18.338 { 00:11:18.338 "name": "BaseBdev1", 00:11:18.338 "uuid": "85cb595a-f0b3-4b1d-8336-347c81de1a7d", 00:11:18.338 "is_configured": true, 00:11:18.338 "data_offset": 0, 00:11:18.338 "data_size": 65536 00:11:18.338 }, 00:11:18.338 { 00:11:18.338 "name": "BaseBdev2", 00:11:18.338 "uuid": "7b25fa7d-98eb-4890-ae86-843a5824b96a", 00:11:18.338 "is_configured": true, 00:11:18.338 "data_offset": 0, 00:11:18.338 "data_size": 65536 00:11:18.338 }, 00:11:18.338 { 00:11:18.338 "name": "BaseBdev3", 00:11:18.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.338 "is_configured": false, 00:11:18.338 "data_offset": 0, 00:11:18.338 "data_size": 0 00:11:18.338 } 00:11:18.338 ] 00:11:18.338 }' 00:11:18.338 06:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.338 06:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.915 [2024-08-13 06:06:20.668853] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.915 [2024-08-13 06:06:20.668910] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:18.915 [2024-08-13 06:06:20.668918] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:18.915 [2024-08-13 06:06:20.669207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:18.915 [2024-08-13 06:06:20.669341] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:18.915 [2024-08-13 06:06:20.669359] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:11:18.915 [2024-08-13 06:06:20.669577] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.915 BaseBdev3 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:18.915 06:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:19.174 06:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:19.433 [ 00:11:19.433 { 00:11:19.433 "name": "BaseBdev3", 00:11:19.433 "aliases": [ 00:11:19.433 "513d8b36-d6e4-4526-ab58-eb313bbac087" 00:11:19.433 ], 00:11:19.433 "product_name": "Malloc disk", 00:11:19.433 "block_size": 512, 00:11:19.433 "num_blocks": 65536, 00:11:19.433 "uuid": "513d8b36-d6e4-4526-ab58-eb313bbac087", 00:11:19.433 "assigned_rate_limits": { 00:11:19.433 "rw_ios_per_sec": 0, 00:11:19.433 "rw_mbytes_per_sec": 0, 00:11:19.433 "r_mbytes_per_sec": 0, 00:11:19.433 "w_mbytes_per_sec": 0 00:11:19.433 }, 00:11:19.433 "claimed": true, 00:11:19.433 "claim_type": "exclusive_write", 00:11:19.433 "zoned": false, 00:11:19.433 "supported_io_types": { 00:11:19.433 "read": true, 00:11:19.433 "write": true, 00:11:19.433 "unmap": true, 00:11:19.433 "flush": true, 00:11:19.433 "reset": true, 00:11:19.433 "nvme_admin": false, 00:11:19.433 "nvme_io": false, 00:11:19.433 "nvme_io_md": false, 00:11:19.433 "write_zeroes": true, 00:11:19.433 "zcopy": true, 00:11:19.433 "get_zone_info": false, 00:11:19.433 "zone_management": false, 00:11:19.433 "zone_append": false, 00:11:19.433 "compare": false, 00:11:19.433 "compare_and_write": false, 00:11:19.433 "abort": true, 00:11:19.433 "seek_hole": false, 00:11:19.433 "seek_data": false, 00:11:19.433 "copy": true, 00:11:19.433 "nvme_iov_md": false 00:11:19.433 }, 00:11:19.433 "memory_domains": [ 00:11:19.434 { 00:11:19.434 "dma_device_id": "system", 00:11:19.434 "dma_device_type": 1 00:11:19.434 }, 00:11:19.434 { 00:11:19.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.434 "dma_device_type": 2 00:11:19.434 } 00:11:19.434 ], 00:11:19.434 "driver_specific": {} 00:11:19.434 } 00:11:19.434 ] 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 
00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.434 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.693 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.693 "name": "Existed_Raid", 00:11:19.693 "uuid": "3f87b030-64c5-402a-9110-fe508c73c2c7", 00:11:19.693 "strip_size_kb": 0, 00:11:19.693 "state": "online", 00:11:19.693 "raid_level": "raid1", 00:11:19.693 "superblock": false, 00:11:19.693 "num_base_bdevs": 3, 00:11:19.693 "num_base_bdevs_discovered": 3, 00:11:19.693 "num_base_bdevs_operational": 3, 00:11:19.693 "base_bdevs_list": [ 00:11:19.693 { 00:11:19.693 "name": "BaseBdev1", 00:11:19.693 "uuid": "85cb595a-f0b3-4b1d-8336-347c81de1a7d", 00:11:19.693 "is_configured": true, 00:11:19.693 "data_offset": 0, 00:11:19.693 "data_size": 65536 00:11:19.693 }, 00:11:19.693 { 00:11:19.693 "name": "BaseBdev2", 00:11:19.693 "uuid": "7b25fa7d-98eb-4890-ae86-843a5824b96a", 00:11:19.693 "is_configured": true, 00:11:19.693 "data_offset": 0, 00:11:19.693 "data_size": 65536 00:11:19.693 }, 00:11:19.693 { 00:11:19.693 "name": "BaseBdev3", 00:11:19.693 "uuid": "513d8b36-d6e4-4526-ab58-eb313bbac087", 00:11:19.693 "is_configured": true, 00:11:19.693 "data_offset": 0, 00:11:19.693 "data_size": 65536 00:11:19.693 } 00:11:19.693 ] 00:11:19.693 }' 00:11:19.693 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.693 06:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:20.262 06:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:20.262 [2024-08-13 06:06:22.007101] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.262 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:20.262 "name": "Existed_Raid", 00:11:20.262 "aliases": [ 00:11:20.262 "3f87b030-64c5-402a-9110-fe508c73c2c7" 00:11:20.262 ], 00:11:20.262 "product_name": "Raid Volume", 00:11:20.262 "block_size": 512, 00:11:20.262 "num_blocks": 65536, 00:11:20.262 "uuid": "3f87b030-64c5-402a-9110-fe508c73c2c7", 00:11:20.262 "assigned_rate_limits": { 00:11:20.262 "rw_ios_per_sec": 0, 00:11:20.262 "rw_mbytes_per_sec": 0, 00:11:20.262 "r_mbytes_per_sec": 0, 00:11:20.262 "w_mbytes_per_sec": 0 00:11:20.262 }, 00:11:20.262 "claimed": false, 00:11:20.262 "zoned": false, 00:11:20.262 "supported_io_types": { 00:11:20.262 "read": true, 00:11:20.262 "write": true, 00:11:20.262 "unmap": false, 00:11:20.262 "flush": false, 00:11:20.262 "reset": true, 00:11:20.262 "nvme_admin": false, 00:11:20.262 "nvme_io": false, 00:11:20.262 "nvme_io_md": false, 00:11:20.262 "write_zeroes": true, 00:11:20.262 "zcopy": false, 00:11:20.262 "get_zone_info": false, 00:11:20.262 "zone_management": false, 00:11:20.262 "zone_append": false, 00:11:20.262 "compare": false, 00:11:20.262 "compare_and_write": false, 00:11:20.262 "abort": false, 00:11:20.262 "seek_hole": false, 00:11:20.262 "seek_data": false, 00:11:20.262 "copy": false, 00:11:20.262 "nvme_iov_md": false 00:11:20.262 }, 00:11:20.262 "memory_domains": [ 00:11:20.262 { 00:11:20.262 "dma_device_id": "system", 00:11:20.262 "dma_device_type": 1 00:11:20.262 }, 00:11:20.262 { 00:11:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.262 "dma_device_type": 2 00:11:20.262 }, 00:11:20.262 { 00:11:20.262 "dma_device_id": "system", 00:11:20.262 "dma_device_type": 1 00:11:20.262 }, 00:11:20.262 { 00:11:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.262 "dma_device_type": 2 00:11:20.262 }, 00:11:20.262 { 00:11:20.262 "dma_device_id": "system", 00:11:20.262 "dma_device_type": 1 00:11:20.262 }, 00:11:20.262 { 00:11:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.262 "dma_device_type": 2 00:11:20.262 } 00:11:20.262 ], 00:11:20.262 "driver_specific": { 00:11:20.262 "raid": { 00:11:20.262 "uuid": "3f87b030-64c5-402a-9110-fe508c73c2c7", 00:11:20.262 "strip_size_kb": 0, 00:11:20.262 "state": "online", 00:11:20.262 "raid_level": "raid1", 00:11:20.262 "superblock": false, 00:11:20.262 "num_base_bdevs": 3, 00:11:20.262 "num_base_bdevs_discovered": 3, 00:11:20.262 "num_base_bdevs_operational": 3, 00:11:20.262 "base_bdevs_list": [ 00:11:20.262 { 00:11:20.262 "name": "BaseBdev1", 00:11:20.262 "uuid": "85cb595a-f0b3-4b1d-8336-347c81de1a7d", 00:11:20.262 "is_configured": true, 00:11:20.262 "data_offset": 0, 00:11:20.262 "data_size": 65536 00:11:20.262 }, 00:11:20.262 { 00:11:20.262 "name": "BaseBdev2", 00:11:20.262 "uuid": "7b25fa7d-98eb-4890-ae86-843a5824b96a", 00:11:20.262 "is_configured": true, 00:11:20.262 "data_offset": 0, 00:11:20.262 "data_size": 65536 00:11:20.262 }, 00:11:20.262 { 00:11:20.262 "name": "BaseBdev3", 00:11:20.262 "uuid": "513d8b36-d6e4-4526-ab58-eb313bbac087", 00:11:20.262 "is_configured": true, 00:11:20.262 "data_offset": 0, 00:11:20.262 "data_size": 65536 00:11:20.262 } 00:11:20.262 ] 00:11:20.262 } 00:11:20.262 } 00:11:20.262 }' 00:11:20.262 06:06:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.522 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:20.522 BaseBdev2 00:11:20.522 BaseBdev3' 00:11:20.522 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:20.522 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:20.522 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:20.522 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:20.522 "name": "BaseBdev1", 00:11:20.522 "aliases": [ 00:11:20.522 "85cb595a-f0b3-4b1d-8336-347c81de1a7d" 00:11:20.522 ], 00:11:20.522 "product_name": "Malloc disk", 00:11:20.522 "block_size": 512, 00:11:20.522 "num_blocks": 65536, 00:11:20.522 "uuid": "85cb595a-f0b3-4b1d-8336-347c81de1a7d", 00:11:20.522 "assigned_rate_limits": { 00:11:20.522 "rw_ios_per_sec": 0, 00:11:20.522 "rw_mbytes_per_sec": 0, 00:11:20.522 "r_mbytes_per_sec": 0, 00:11:20.522 "w_mbytes_per_sec": 0 00:11:20.522 }, 00:11:20.522 "claimed": true, 00:11:20.522 "claim_type": "exclusive_write", 00:11:20.522 "zoned": false, 00:11:20.522 "supported_io_types": { 00:11:20.522 "read": true, 00:11:20.522 "write": true, 00:11:20.522 "unmap": true, 00:11:20.522 "flush": true, 00:11:20.522 "reset": true, 00:11:20.522 "nvme_admin": false, 00:11:20.522 "nvme_io": false, 00:11:20.522 "nvme_io_md": false, 00:11:20.522 "write_zeroes": true, 00:11:20.522 "zcopy": true, 00:11:20.522 "get_zone_info": false, 00:11:20.522 "zone_management": false, 00:11:20.522 "zone_append": false, 00:11:20.522 "compare": false, 00:11:20.522 "compare_and_write": false, 00:11:20.522 "abort": true, 00:11:20.522 "seek_hole": false, 00:11:20.522 "seek_data": false, 00:11:20.522 "copy": true, 00:11:20.522 "nvme_iov_md": false 00:11:20.522 }, 00:11:20.522 "memory_domains": [ 00:11:20.522 { 00:11:20.522 "dma_device_id": "system", 00:11:20.522 "dma_device_type": 1 00:11:20.522 }, 00:11:20.522 { 00:11:20.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.522 "dma_device_type": 2 00:11:20.522 } 00:11:20.522 ], 00:11:20.522 "driver_specific": {} 00:11:20.522 }' 00:11:20.522 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:20.522 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:20.782 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.041 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:11:21.041 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:21.041 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:21.042 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:21.042 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:21.042 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:21.042 "name": "BaseBdev2", 00:11:21.042 "aliases": [ 00:11:21.042 "7b25fa7d-98eb-4890-ae86-843a5824b96a" 00:11:21.042 ], 00:11:21.042 "product_name": "Malloc disk", 00:11:21.042 "block_size": 512, 00:11:21.042 "num_blocks": 65536, 00:11:21.042 "uuid": "7b25fa7d-98eb-4890-ae86-843a5824b96a", 00:11:21.042 "assigned_rate_limits": { 00:11:21.042 "rw_ios_per_sec": 0, 00:11:21.042 "rw_mbytes_per_sec": 0, 00:11:21.042 "r_mbytes_per_sec": 0, 00:11:21.042 "w_mbytes_per_sec": 0 00:11:21.042 }, 00:11:21.042 "claimed": true, 00:11:21.042 "claim_type": "exclusive_write", 00:11:21.042 "zoned": false, 00:11:21.042 "supported_io_types": { 00:11:21.042 "read": true, 00:11:21.042 "write": true, 00:11:21.042 "unmap": true, 00:11:21.042 "flush": true, 00:11:21.042 "reset": true, 00:11:21.042 "nvme_admin": false, 00:11:21.042 "nvme_io": false, 00:11:21.042 "nvme_io_md": false, 00:11:21.042 "write_zeroes": true, 00:11:21.042 "zcopy": true, 00:11:21.042 "get_zone_info": false, 00:11:21.042 "zone_management": false, 00:11:21.042 "zone_append": false, 00:11:21.042 "compare": false, 00:11:21.042 "compare_and_write": false, 00:11:21.042 "abort": true, 00:11:21.042 "seek_hole": false, 00:11:21.042 "seek_data": false, 00:11:21.042 "copy": true, 00:11:21.042 "nvme_iov_md": false 00:11:21.042 }, 00:11:21.042 "memory_domains": [ 00:11:21.042 { 00:11:21.042 "dma_device_id": "system", 00:11:21.042 "dma_device_type": 1 00:11:21.042 }, 00:11:21.042 { 00:11:21.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.042 "dma_device_type": 2 00:11:21.042 } 00:11:21.042 ], 00:11:21.042 "driver_specific": {} 00:11:21.042 }' 00:11:21.042 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.301 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.301 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:21.301 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.301 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.301 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:21.301 06:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.301 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.301 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:21.301 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.561 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.561 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:21.561 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:11:21.561 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:21.561 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:21.561 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:21.561 "name": "BaseBdev3", 00:11:21.561 "aliases": [ 00:11:21.561 "513d8b36-d6e4-4526-ab58-eb313bbac087" 00:11:21.561 ], 00:11:21.561 "product_name": "Malloc disk", 00:11:21.561 "block_size": 512, 00:11:21.561 "num_blocks": 65536, 00:11:21.561 "uuid": "513d8b36-d6e4-4526-ab58-eb313bbac087", 00:11:21.561 "assigned_rate_limits": { 00:11:21.562 "rw_ios_per_sec": 0, 00:11:21.562 "rw_mbytes_per_sec": 0, 00:11:21.562 "r_mbytes_per_sec": 0, 00:11:21.562 "w_mbytes_per_sec": 0 00:11:21.562 }, 00:11:21.562 "claimed": true, 00:11:21.562 "claim_type": "exclusive_write", 00:11:21.562 "zoned": false, 00:11:21.562 "supported_io_types": { 00:11:21.562 "read": true, 00:11:21.562 "write": true, 00:11:21.562 "unmap": true, 00:11:21.562 "flush": true, 00:11:21.562 "reset": true, 00:11:21.562 "nvme_admin": false, 00:11:21.562 "nvme_io": false, 00:11:21.562 "nvme_io_md": false, 00:11:21.562 "write_zeroes": true, 00:11:21.562 "zcopy": true, 00:11:21.562 "get_zone_info": false, 00:11:21.562 "zone_management": false, 00:11:21.562 "zone_append": false, 00:11:21.562 "compare": false, 00:11:21.562 "compare_and_write": false, 00:11:21.562 "abort": true, 00:11:21.562 "seek_hole": false, 00:11:21.562 "seek_data": false, 00:11:21.562 "copy": true, 00:11:21.562 "nvme_iov_md": false 00:11:21.562 }, 00:11:21.562 "memory_domains": [ 00:11:21.562 { 00:11:21.562 "dma_device_id": "system", 00:11:21.562 "dma_device_type": 1 00:11:21.562 }, 00:11:21.562 { 00:11:21.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.562 "dma_device_type": 2 00:11:21.562 } 00:11:21.562 ], 00:11:21.562 "driver_specific": {} 00:11:21.562 }' 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:21.821 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:22.081 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:22.081 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:22.081 [2024-08-13 06:06:23.847786] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.341 06:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.341 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.341 "name": "Existed_Raid", 00:11:22.341 "uuid": "3f87b030-64c5-402a-9110-fe508c73c2c7", 00:11:22.341 "strip_size_kb": 0, 00:11:22.341 "state": "online", 00:11:22.341 "raid_level": "raid1", 00:11:22.341 "superblock": false, 00:11:22.341 "num_base_bdevs": 3, 00:11:22.341 "num_base_bdevs_discovered": 2, 00:11:22.341 "num_base_bdevs_operational": 2, 00:11:22.341 "base_bdevs_list": [ 00:11:22.341 { 00:11:22.341 "name": null, 00:11:22.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.341 "is_configured": false, 00:11:22.341 "data_offset": 0, 00:11:22.341 "data_size": 65536 00:11:22.341 }, 00:11:22.341 { 00:11:22.341 "name": "BaseBdev2", 00:11:22.341 "uuid": "7b25fa7d-98eb-4890-ae86-843a5824b96a", 00:11:22.341 "is_configured": true, 00:11:22.341 "data_offset": 0, 00:11:22.341 "data_size": 65536 00:11:22.341 }, 00:11:22.341 { 00:11:22.341 "name": "BaseBdev3", 00:11:22.341 "uuid": "513d8b36-d6e4-4526-ab58-eb313bbac087", 00:11:22.341 "is_configured": true, 00:11:22.341 "data_offset": 0, 00:11:22.341 "data_size": 65536 00:11:22.341 } 00:11:22.341 ] 00:11:22.341 }' 00:11:22.341 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.341 06:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.909 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:22.909 06:06:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:22.909 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.909 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:23.168 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:23.168 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.168 06:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:23.427 [2024-08-13 06:06:25.037091] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.427 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:23.427 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:23.427 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.427 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:23.687 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:23.687 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.687 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:23.687 [2024-08-13 06:06:25.439359] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.687 [2024-08-13 06:06:25.439539] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.687 [2024-08-13 06:06:25.450877] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.687 [2024-08-13 06:06:25.450931] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.687 [2024-08-13 06:06:25.450946] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:23.687 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:23.687 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:23.687 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:23.687 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.947 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:23.947 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:23.947 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:23.947 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:23.947 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:23.947 06:06:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.206 BaseBdev2 00:11:24.206 06:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:24.206 06:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:24.206 06:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:24.206 06:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:24.206 06:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:24.206 06:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:24.206 06:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:24.466 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.466 [ 00:11:24.466 { 00:11:24.466 "name": "BaseBdev2", 00:11:24.466 "aliases": [ 00:11:24.466 "da30adbf-6b37-4c7e-b0b8-e86b787769ab" 00:11:24.466 ], 00:11:24.466 "product_name": "Malloc disk", 00:11:24.466 "block_size": 512, 00:11:24.466 "num_blocks": 65536, 00:11:24.466 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:24.466 "assigned_rate_limits": { 00:11:24.466 "rw_ios_per_sec": 0, 00:11:24.466 "rw_mbytes_per_sec": 0, 00:11:24.466 "r_mbytes_per_sec": 0, 00:11:24.466 "w_mbytes_per_sec": 0 00:11:24.466 }, 00:11:24.466 "claimed": false, 00:11:24.466 "zoned": false, 00:11:24.466 "supported_io_types": { 00:11:24.466 "read": true, 00:11:24.466 "write": true, 00:11:24.466 "unmap": true, 00:11:24.466 "flush": true, 00:11:24.466 "reset": true, 00:11:24.466 "nvme_admin": false, 00:11:24.466 "nvme_io": false, 00:11:24.466 "nvme_io_md": false, 00:11:24.466 "write_zeroes": true, 00:11:24.466 "zcopy": true, 00:11:24.466 "get_zone_info": false, 00:11:24.466 "zone_management": false, 00:11:24.466 "zone_append": false, 00:11:24.466 "compare": false, 00:11:24.466 "compare_and_write": false, 00:11:24.466 "abort": true, 00:11:24.466 "seek_hole": false, 00:11:24.466 "seek_data": false, 00:11:24.466 "copy": true, 00:11:24.466 "nvme_iov_md": false 00:11:24.466 }, 00:11:24.466 "memory_domains": [ 00:11:24.466 { 00:11:24.466 "dma_device_id": "system", 00:11:24.466 "dma_device_type": 1 00:11:24.466 }, 00:11:24.466 { 00:11:24.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.466 "dma_device_type": 2 00:11:24.466 } 00:11:24.466 ], 00:11:24.466 "driver_specific": {} 00:11:24.466 } 00:11:24.466 ] 00:11:24.466 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:24.466 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:24.466 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:24.466 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.726 BaseBdev3 00:11:24.726 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:24.726 06:06:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:24.726 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:24.726 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:24.726 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:24.726 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:24.726 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:24.986 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.245 [ 00:11:25.245 { 00:11:25.245 "name": "BaseBdev3", 00:11:25.245 "aliases": [ 00:11:25.245 "79b512e4-49ca-4d01-bb3f-0513843bd51e" 00:11:25.245 ], 00:11:25.245 "product_name": "Malloc disk", 00:11:25.245 "block_size": 512, 00:11:25.245 "num_blocks": 65536, 00:11:25.245 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:25.245 "assigned_rate_limits": { 00:11:25.245 "rw_ios_per_sec": 0, 00:11:25.245 "rw_mbytes_per_sec": 0, 00:11:25.245 "r_mbytes_per_sec": 0, 00:11:25.245 "w_mbytes_per_sec": 0 00:11:25.245 }, 00:11:25.245 "claimed": false, 00:11:25.245 "zoned": false, 00:11:25.245 "supported_io_types": { 00:11:25.245 "read": true, 00:11:25.245 "write": true, 00:11:25.245 "unmap": true, 00:11:25.245 "flush": true, 00:11:25.245 "reset": true, 00:11:25.245 "nvme_admin": false, 00:11:25.245 "nvme_io": false, 00:11:25.245 "nvme_io_md": false, 00:11:25.245 "write_zeroes": true, 00:11:25.245 "zcopy": true, 00:11:25.245 "get_zone_info": false, 00:11:25.245 "zone_management": false, 00:11:25.245 "zone_append": false, 00:11:25.245 "compare": false, 00:11:25.245 "compare_and_write": false, 00:11:25.245 "abort": true, 00:11:25.245 "seek_hole": false, 00:11:25.245 "seek_data": false, 00:11:25.245 "copy": true, 00:11:25.245 "nvme_iov_md": false 00:11:25.245 }, 00:11:25.245 "memory_domains": [ 00:11:25.245 { 00:11:25.245 "dma_device_id": "system", 00:11:25.245 "dma_device_type": 1 00:11:25.245 }, 00:11:25.245 { 00:11:25.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.245 "dma_device_type": 2 00:11:25.245 } 00:11:25.245 ], 00:11:25.245 "driver_specific": {} 00:11:25.245 } 00:11:25.245 ] 00:11:25.245 06:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:25.245 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:25.245 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:25.245 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:25.245 [2024-08-13 06:06:26.973771] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.245 [2024-08-13 06:06:26.973825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.245 [2024-08-13 06:06:26.973854] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.245 [2024-08-13 06:06:26.975696] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:11:25.245 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.245 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.245 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.246 06:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.505 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.505 "name": "Existed_Raid", 00:11:25.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.505 "strip_size_kb": 0, 00:11:25.505 "state": "configuring", 00:11:25.505 "raid_level": "raid1", 00:11:25.505 "superblock": false, 00:11:25.505 "num_base_bdevs": 3, 00:11:25.505 "num_base_bdevs_discovered": 2, 00:11:25.505 "num_base_bdevs_operational": 3, 00:11:25.505 "base_bdevs_list": [ 00:11:25.505 { 00:11:25.505 "name": "BaseBdev1", 00:11:25.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.505 "is_configured": false, 00:11:25.505 "data_offset": 0, 00:11:25.505 "data_size": 0 00:11:25.505 }, 00:11:25.505 { 00:11:25.505 "name": "BaseBdev2", 00:11:25.505 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:25.505 "is_configured": true, 00:11:25.505 "data_offset": 0, 00:11:25.505 "data_size": 65536 00:11:25.505 }, 00:11:25.505 { 00:11:25.505 "name": "BaseBdev3", 00:11:25.505 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:25.505 "is_configured": true, 00:11:25.505 "data_offset": 0, 00:11:25.505 "data_size": 65536 00:11:25.505 } 00:11:25.505 ] 00:11:25.505 }' 00:11:25.505 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.505 06:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.073 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:26.333 [2024-08-13 06:06:27.888189] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.333 06:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.333 06:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:26.333 "name": "Existed_Raid", 00:11:26.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.333 "strip_size_kb": 0, 00:11:26.333 "state": "configuring", 00:11:26.333 "raid_level": "raid1", 00:11:26.333 "superblock": false, 00:11:26.333 "num_base_bdevs": 3, 00:11:26.333 "num_base_bdevs_discovered": 1, 00:11:26.333 "num_base_bdevs_operational": 3, 00:11:26.333 "base_bdevs_list": [ 00:11:26.333 { 00:11:26.333 "name": "BaseBdev1", 00:11:26.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.333 "is_configured": false, 00:11:26.333 "data_offset": 0, 00:11:26.333 "data_size": 0 00:11:26.333 }, 00:11:26.333 { 00:11:26.333 "name": null, 00:11:26.333 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:26.333 "is_configured": false, 00:11:26.333 "data_offset": 0, 00:11:26.333 "data_size": 65536 00:11:26.333 }, 00:11:26.333 { 00:11:26.333 "name": "BaseBdev3", 00:11:26.333 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:26.334 "is_configured": true, 00:11:26.334 "data_offset": 0, 00:11:26.334 "data_size": 65536 00:11:26.334 } 00:11:26.334 ] 00:11:26.334 }' 00:11:26.334 06:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:26.334 06:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.902 06:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.902 06:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.162 06:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:27.162 06:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.422 [2024-08-13 06:06:29.005325] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.422 BaseBdev1 00:11:27.422 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:27.422 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:27.422 06:06:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:27.422 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:27.422 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:27.422 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:27.422 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.681 [ 00:11:27.681 { 00:11:27.681 "name": "BaseBdev1", 00:11:27.681 "aliases": [ 00:11:27.681 "fc651bff-8450-4aa8-b8c1-ffec47e2e320" 00:11:27.681 ], 00:11:27.681 "product_name": "Malloc disk", 00:11:27.681 "block_size": 512, 00:11:27.681 "num_blocks": 65536, 00:11:27.681 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:27.681 "assigned_rate_limits": { 00:11:27.681 "rw_ios_per_sec": 0, 00:11:27.681 "rw_mbytes_per_sec": 0, 00:11:27.681 "r_mbytes_per_sec": 0, 00:11:27.681 "w_mbytes_per_sec": 0 00:11:27.681 }, 00:11:27.681 "claimed": true, 00:11:27.681 "claim_type": "exclusive_write", 00:11:27.681 "zoned": false, 00:11:27.681 "supported_io_types": { 00:11:27.681 "read": true, 00:11:27.681 "write": true, 00:11:27.681 "unmap": true, 00:11:27.681 "flush": true, 00:11:27.681 "reset": true, 00:11:27.681 "nvme_admin": false, 00:11:27.681 "nvme_io": false, 00:11:27.681 "nvme_io_md": false, 00:11:27.681 "write_zeroes": true, 00:11:27.681 "zcopy": true, 00:11:27.681 "get_zone_info": false, 00:11:27.681 "zone_management": false, 00:11:27.681 "zone_append": false, 00:11:27.681 "compare": false, 00:11:27.681 "compare_and_write": false, 00:11:27.681 "abort": true, 00:11:27.681 "seek_hole": false, 00:11:27.681 "seek_data": false, 00:11:27.681 "copy": true, 00:11:27.681 "nvme_iov_md": false 00:11:27.681 }, 00:11:27.681 "memory_domains": [ 00:11:27.681 { 00:11:27.681 "dma_device_id": "system", 00:11:27.681 "dma_device_type": 1 00:11:27.681 }, 00:11:27.681 { 00:11:27.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.681 "dma_device_type": 2 00:11:27.681 } 00:11:27.681 ], 00:11:27.681 "driver_specific": {} 00:11:27.681 } 00:11:27.681 ] 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.681 06:06:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.681 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.940 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.940 "name": "Existed_Raid", 00:11:27.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.940 "strip_size_kb": 0, 00:11:27.940 "state": "configuring", 00:11:27.940 "raid_level": "raid1", 00:11:27.940 "superblock": false, 00:11:27.940 "num_base_bdevs": 3, 00:11:27.940 "num_base_bdevs_discovered": 2, 00:11:27.940 "num_base_bdevs_operational": 3, 00:11:27.940 "base_bdevs_list": [ 00:11:27.940 { 00:11:27.940 "name": "BaseBdev1", 00:11:27.940 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:27.940 "is_configured": true, 00:11:27.940 "data_offset": 0, 00:11:27.940 "data_size": 65536 00:11:27.940 }, 00:11:27.940 { 00:11:27.940 "name": null, 00:11:27.940 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:27.940 "is_configured": false, 00:11:27.940 "data_offset": 0, 00:11:27.940 "data_size": 65536 00:11:27.940 }, 00:11:27.940 { 00:11:27.940 "name": "BaseBdev3", 00:11:27.940 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:27.940 "is_configured": true, 00:11:27.940 "data_offset": 0, 00:11:27.940 "data_size": 65536 00:11:27.940 } 00:11:27.940 ] 00:11:27.940 }' 00:11:27.940 06:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.940 06:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.508 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.508 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:28.768 [2024-08-13 06:06:30.494974] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- 
# local num_base_bdevs 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.768 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.027 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.027 "name": "Existed_Raid", 00:11:29.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.027 "strip_size_kb": 0, 00:11:29.027 "state": "configuring", 00:11:29.027 "raid_level": "raid1", 00:11:29.027 "superblock": false, 00:11:29.027 "num_base_bdevs": 3, 00:11:29.027 "num_base_bdevs_discovered": 1, 00:11:29.027 "num_base_bdevs_operational": 3, 00:11:29.027 "base_bdevs_list": [ 00:11:29.027 { 00:11:29.027 "name": "BaseBdev1", 00:11:29.027 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:29.027 "is_configured": true, 00:11:29.027 "data_offset": 0, 00:11:29.027 "data_size": 65536 00:11:29.027 }, 00:11:29.027 { 00:11:29.027 "name": null, 00:11:29.027 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:29.027 "is_configured": false, 00:11:29.027 "data_offset": 0, 00:11:29.027 "data_size": 65536 00:11:29.027 }, 00:11:29.027 { 00:11:29.027 "name": null, 00:11:29.027 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:29.027 "is_configured": false, 00:11:29.027 "data_offset": 0, 00:11:29.027 "data_size": 65536 00:11:29.027 } 00:11:29.027 ] 00:11:29.027 }' 00:11:29.027 06:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.027 06:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.596 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.596 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:29.855 [2024-08-13 06:06:31.581158] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.855 
06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.855 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.115 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:30.115 "name": "Existed_Raid", 00:11:30.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.115 "strip_size_kb": 0, 00:11:30.115 "state": "configuring", 00:11:30.115 "raid_level": "raid1", 00:11:30.115 "superblock": false, 00:11:30.115 "num_base_bdevs": 3, 00:11:30.115 "num_base_bdevs_discovered": 2, 00:11:30.115 "num_base_bdevs_operational": 3, 00:11:30.115 "base_bdevs_list": [ 00:11:30.115 { 00:11:30.115 "name": "BaseBdev1", 00:11:30.115 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:30.115 "is_configured": true, 00:11:30.115 "data_offset": 0, 00:11:30.115 "data_size": 65536 00:11:30.115 }, 00:11:30.115 { 00:11:30.115 "name": null, 00:11:30.115 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:30.115 "is_configured": false, 00:11:30.115 "data_offset": 0, 00:11:30.115 "data_size": 65536 00:11:30.115 }, 00:11:30.115 { 00:11:30.115 "name": "BaseBdev3", 00:11:30.115 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:30.115 "is_configured": true, 00:11:30.115 "data_offset": 0, 00:11:30.115 "data_size": 65536 00:11:30.115 } 00:11:30.115 ] 00:11:30.115 }' 00:11:30.115 06:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:30.115 06:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.684 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.684 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.945 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:30.945 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:30.946 [2024-08-13 06:06:32.727357] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.237 "name": "Existed_Raid", 00:11:31.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.237 "strip_size_kb": 0, 00:11:31.237 "state": "configuring", 00:11:31.237 "raid_level": "raid1", 00:11:31.237 "superblock": false, 00:11:31.237 "num_base_bdevs": 3, 00:11:31.237 "num_base_bdevs_discovered": 1, 00:11:31.237 "num_base_bdevs_operational": 3, 00:11:31.237 "base_bdevs_list": [ 00:11:31.237 { 00:11:31.237 "name": null, 00:11:31.237 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:31.237 "is_configured": false, 00:11:31.237 "data_offset": 0, 00:11:31.237 "data_size": 65536 00:11:31.237 }, 00:11:31.237 { 00:11:31.237 "name": null, 00:11:31.237 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:31.237 "is_configured": false, 00:11:31.237 "data_offset": 0, 00:11:31.237 "data_size": 65536 00:11:31.237 }, 00:11:31.237 { 00:11:31.237 "name": "BaseBdev3", 00:11:31.237 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:31.237 "is_configured": true, 00:11:31.237 "data_offset": 0, 00:11:31.237 "data_size": 65536 00:11:31.237 } 00:11:31.237 ] 00:11:31.237 }' 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.237 06:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.806 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.806 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.072 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:32.072 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:32.333 [2024-08-13 06:06:33.880254] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:32.333 
06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.333 06:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.333 06:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.333 "name": "Existed_Raid", 00:11:32.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.333 "strip_size_kb": 0, 00:11:32.333 "state": "configuring", 00:11:32.333 "raid_level": "raid1", 00:11:32.333 "superblock": false, 00:11:32.333 "num_base_bdevs": 3, 00:11:32.333 "num_base_bdevs_discovered": 2, 00:11:32.333 "num_base_bdevs_operational": 3, 00:11:32.333 "base_bdevs_list": [ 00:11:32.333 { 00:11:32.333 "name": null, 00:11:32.333 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:32.333 "is_configured": false, 00:11:32.333 "data_offset": 0, 00:11:32.333 "data_size": 65536 00:11:32.333 }, 00:11:32.333 { 00:11:32.333 "name": "BaseBdev2", 00:11:32.333 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:32.333 "is_configured": true, 00:11:32.333 "data_offset": 0, 00:11:32.333 "data_size": 65536 00:11:32.333 }, 00:11:32.333 { 00:11:32.333 "name": "BaseBdev3", 00:11:32.333 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:32.333 "is_configured": true, 00:11:32.333 "data_offset": 0, 00:11:32.333 "data_size": 65536 00:11:32.333 } 00:11:32.333 ] 00:11:32.333 }' 00:11:32.333 06:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.333 06:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.902 06:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.902 06:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.162 06:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:33.162 06:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.162 06:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:33.421 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fc651bff-8450-4aa8-b8c1-ffec47e2e320 00:11:33.680 [2024-08-13 06:06:35.221208] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:33.680 [2024-08-13 06:06:35.221255] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:33.680 [2024-08-13 06:06:35.221265] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:33.680 [2024-08-13 06:06:35.221511] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:33.680 [2024-08-13 06:06:35.221633] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:33.680 [2024-08-13 06:06:35.221641] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:33.680 [2024-08-13 06:06:35.221822] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.680 NewBaseBdev 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.680 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:33.940 [ 00:11:33.940 { 00:11:33.940 "name": "NewBaseBdev", 00:11:33.940 "aliases": [ 00:11:33.940 "fc651bff-8450-4aa8-b8c1-ffec47e2e320" 00:11:33.940 ], 00:11:33.940 "product_name": "Malloc disk", 00:11:33.940 "block_size": 512, 00:11:33.940 "num_blocks": 65536, 00:11:33.940 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:33.940 "assigned_rate_limits": { 00:11:33.940 "rw_ios_per_sec": 0, 00:11:33.940 "rw_mbytes_per_sec": 0, 00:11:33.940 "r_mbytes_per_sec": 0, 00:11:33.940 "w_mbytes_per_sec": 0 00:11:33.940 }, 00:11:33.940 "claimed": true, 00:11:33.940 "claim_type": "exclusive_write", 00:11:33.940 "zoned": false, 00:11:33.940 "supported_io_types": { 00:11:33.940 "read": true, 00:11:33.940 "write": true, 00:11:33.940 "unmap": true, 00:11:33.940 "flush": true, 00:11:33.940 "reset": true, 00:11:33.940 "nvme_admin": false, 00:11:33.940 "nvme_io": false, 00:11:33.940 "nvme_io_md": false, 00:11:33.940 "write_zeroes": true, 00:11:33.940 "zcopy": true, 00:11:33.940 "get_zone_info": false, 00:11:33.940 "zone_management": false, 00:11:33.940 "zone_append": false, 00:11:33.940 "compare": false, 00:11:33.940 "compare_and_write": false, 00:11:33.940 "abort": true, 00:11:33.940 "seek_hole": false, 00:11:33.940 "seek_data": false, 00:11:33.940 "copy": true, 00:11:33.940 "nvme_iov_md": false 00:11:33.940 }, 00:11:33.940 "memory_domains": [ 00:11:33.940 { 00:11:33.940 "dma_device_id": "system", 00:11:33.940 "dma_device_type": 1 00:11:33.940 }, 00:11:33.940 { 00:11:33.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.940 "dma_device_type": 2 00:11:33.940 } 00:11:33.940 ], 00:11:33.940 "driver_specific": {} 00:11:33.940 } 00:11:33.940 ] 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.940 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.199 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.199 "name": "Existed_Raid", 00:11:34.199 "uuid": "e631c2f7-93b9-437e-be95-554af8f6a5ce", 00:11:34.199 "strip_size_kb": 0, 00:11:34.199 "state": "online", 00:11:34.199 "raid_level": "raid1", 00:11:34.199 "superblock": false, 00:11:34.199 "num_base_bdevs": 3, 00:11:34.199 "num_base_bdevs_discovered": 3, 00:11:34.199 "num_base_bdevs_operational": 3, 00:11:34.199 "base_bdevs_list": [ 00:11:34.199 { 00:11:34.199 "name": "NewBaseBdev", 00:11:34.199 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:34.199 "is_configured": true, 00:11:34.199 "data_offset": 0, 00:11:34.199 "data_size": 65536 00:11:34.199 }, 00:11:34.199 { 00:11:34.199 "name": "BaseBdev2", 00:11:34.199 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:34.200 "is_configured": true, 00:11:34.200 "data_offset": 0, 00:11:34.200 "data_size": 65536 00:11:34.200 }, 00:11:34.200 { 00:11:34.200 "name": "BaseBdev3", 00:11:34.200 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:34.200 "is_configured": true, 00:11:34.200 "data_offset": 0, 00:11:34.200 "data_size": 65536 00:11:34.200 } 00:11:34.200 ] 00:11:34.200 }' 00:11:34.200 06:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.200 06:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:34.768 06:06:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:34.768 [2024-08-13 06:06:36.503431] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:34.768 "name": "Existed_Raid", 00:11:34.768 "aliases": [ 00:11:34.768 "e631c2f7-93b9-437e-be95-554af8f6a5ce" 00:11:34.768 ], 00:11:34.768 "product_name": "Raid Volume", 00:11:34.768 "block_size": 512, 00:11:34.768 "num_blocks": 65536, 00:11:34.768 "uuid": "e631c2f7-93b9-437e-be95-554af8f6a5ce", 00:11:34.768 "assigned_rate_limits": { 00:11:34.768 "rw_ios_per_sec": 0, 00:11:34.768 "rw_mbytes_per_sec": 0, 00:11:34.768 "r_mbytes_per_sec": 0, 00:11:34.768 "w_mbytes_per_sec": 0 00:11:34.768 }, 00:11:34.768 "claimed": false, 00:11:34.768 "zoned": false, 00:11:34.768 "supported_io_types": { 00:11:34.768 "read": true, 00:11:34.768 "write": true, 00:11:34.768 "unmap": false, 00:11:34.768 "flush": false, 00:11:34.768 "reset": true, 00:11:34.768 "nvme_admin": false, 00:11:34.768 "nvme_io": false, 00:11:34.768 "nvme_io_md": false, 00:11:34.768 "write_zeroes": true, 00:11:34.768 "zcopy": false, 00:11:34.768 "get_zone_info": false, 00:11:34.768 "zone_management": false, 00:11:34.768 "zone_append": false, 00:11:34.768 "compare": false, 00:11:34.768 "compare_and_write": false, 00:11:34.768 "abort": false, 00:11:34.768 "seek_hole": false, 00:11:34.768 "seek_data": false, 00:11:34.768 "copy": false, 00:11:34.768 "nvme_iov_md": false 00:11:34.768 }, 00:11:34.768 "memory_domains": [ 00:11:34.768 { 00:11:34.768 "dma_device_id": "system", 00:11:34.768 "dma_device_type": 1 00:11:34.768 }, 00:11:34.768 { 00:11:34.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.768 "dma_device_type": 2 00:11:34.768 }, 00:11:34.768 { 00:11:34.768 "dma_device_id": "system", 00:11:34.768 "dma_device_type": 1 00:11:34.768 }, 00:11:34.768 { 00:11:34.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.768 "dma_device_type": 2 00:11:34.768 }, 00:11:34.768 { 00:11:34.768 "dma_device_id": "system", 00:11:34.768 "dma_device_type": 1 00:11:34.768 }, 00:11:34.768 { 00:11:34.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.768 "dma_device_type": 2 00:11:34.768 } 00:11:34.768 ], 00:11:34.768 "driver_specific": { 00:11:34.768 "raid": { 00:11:34.768 "uuid": "e631c2f7-93b9-437e-be95-554af8f6a5ce", 00:11:34.768 "strip_size_kb": 0, 00:11:34.768 "state": "online", 00:11:34.768 "raid_level": "raid1", 00:11:34.768 "superblock": false, 00:11:34.768 "num_base_bdevs": 3, 00:11:34.768 "num_base_bdevs_discovered": 3, 00:11:34.768 "num_base_bdevs_operational": 3, 00:11:34.768 "base_bdevs_list": [ 00:11:34.768 { 00:11:34.768 "name": "NewBaseBdev", 00:11:34.768 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:34.768 "is_configured": true, 00:11:34.768 "data_offset": 0, 00:11:34.768 "data_size": 65536 00:11:34.768 }, 00:11:34.768 { 00:11:34.768 "name": "BaseBdev2", 00:11:34.768 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:34.768 "is_configured": true, 00:11:34.768 "data_offset": 0, 00:11:34.768 "data_size": 65536 00:11:34.768 }, 00:11:34.768 { 00:11:34.768 "name": "BaseBdev3", 00:11:34.768 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:34.768 "is_configured": true, 00:11:34.768 "data_offset": 0, 00:11:34.768 "data_size": 65536 00:11:34.768 } 00:11:34.768 ] 00:11:34.768 } 00:11:34.768 } 00:11:34.768 }' 00:11:34.768 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.028 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:35.028 BaseBdev2 00:11:35.028 BaseBdev3' 00:11:35.028 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:35.028 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.028 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:35.028 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:35.028 "name": "NewBaseBdev", 00:11:35.028 "aliases": [ 00:11:35.028 "fc651bff-8450-4aa8-b8c1-ffec47e2e320" 00:11:35.028 ], 00:11:35.028 "product_name": "Malloc disk", 00:11:35.028 "block_size": 512, 00:11:35.028 "num_blocks": 65536, 00:11:35.028 "uuid": "fc651bff-8450-4aa8-b8c1-ffec47e2e320", 00:11:35.028 "assigned_rate_limits": { 00:11:35.028 "rw_ios_per_sec": 0, 00:11:35.028 "rw_mbytes_per_sec": 0, 00:11:35.028 "r_mbytes_per_sec": 0, 00:11:35.028 "w_mbytes_per_sec": 0 00:11:35.028 }, 00:11:35.028 "claimed": true, 00:11:35.028 "claim_type": "exclusive_write", 00:11:35.028 "zoned": false, 00:11:35.028 "supported_io_types": { 00:11:35.028 "read": true, 00:11:35.028 "write": true, 00:11:35.028 "unmap": true, 00:11:35.028 "flush": true, 00:11:35.028 "reset": true, 00:11:35.028 "nvme_admin": false, 00:11:35.028 "nvme_io": false, 00:11:35.028 "nvme_io_md": false, 00:11:35.028 "write_zeroes": true, 00:11:35.028 "zcopy": true, 00:11:35.028 "get_zone_info": false, 00:11:35.028 "zone_management": false, 00:11:35.028 "zone_append": false, 00:11:35.028 "compare": false, 00:11:35.028 "compare_and_write": false, 00:11:35.028 "abort": true, 00:11:35.028 "seek_hole": false, 00:11:35.028 "seek_data": false, 00:11:35.028 "copy": true, 00:11:35.028 "nvme_iov_md": false 00:11:35.028 }, 00:11:35.028 "memory_domains": [ 00:11:35.028 { 00:11:35.028 "dma_device_id": "system", 00:11:35.028 "dma_device_type": 1 00:11:35.028 }, 00:11:35.028 { 00:11:35.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.028 "dma_device_type": 2 00:11:35.028 } 00:11:35.028 ], 00:11:35.028 "driver_specific": {} 00:11:35.028 }' 00:11:35.028 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.028 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.288 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.288 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.288 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.288 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.288 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.288 06:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.288 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.288 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.288 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.548 06:06:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:35.548 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:35.548 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:35.548 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.548 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:35.548 "name": "BaseBdev2", 00:11:35.548 "aliases": [ 00:11:35.548 "da30adbf-6b37-4c7e-b0b8-e86b787769ab" 00:11:35.548 ], 00:11:35.548 "product_name": "Malloc disk", 00:11:35.548 "block_size": 512, 00:11:35.548 "num_blocks": 65536, 00:11:35.548 "uuid": "da30adbf-6b37-4c7e-b0b8-e86b787769ab", 00:11:35.548 "assigned_rate_limits": { 00:11:35.548 "rw_ios_per_sec": 0, 00:11:35.548 "rw_mbytes_per_sec": 0, 00:11:35.548 "r_mbytes_per_sec": 0, 00:11:35.548 "w_mbytes_per_sec": 0 00:11:35.548 }, 00:11:35.548 "claimed": true, 00:11:35.548 "claim_type": "exclusive_write", 00:11:35.548 "zoned": false, 00:11:35.548 "supported_io_types": { 00:11:35.548 "read": true, 00:11:35.548 "write": true, 00:11:35.548 "unmap": true, 00:11:35.548 "flush": true, 00:11:35.548 "reset": true, 00:11:35.548 "nvme_admin": false, 00:11:35.548 "nvme_io": false, 00:11:35.548 "nvme_io_md": false, 00:11:35.548 "write_zeroes": true, 00:11:35.548 "zcopy": true, 00:11:35.548 "get_zone_info": false, 00:11:35.548 "zone_management": false, 00:11:35.548 "zone_append": false, 00:11:35.548 "compare": false, 00:11:35.548 "compare_and_write": false, 00:11:35.548 "abort": true, 00:11:35.548 "seek_hole": false, 00:11:35.548 "seek_data": false, 00:11:35.548 "copy": true, 00:11:35.548 "nvme_iov_md": false 00:11:35.548 }, 00:11:35.548 "memory_domains": [ 00:11:35.548 { 00:11:35.548 "dma_device_id": "system", 00:11:35.548 "dma_device_type": 1 00:11:35.548 }, 00:11:35.548 { 00:11:35.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.548 "dma_device_type": 2 00:11:35.548 } 00:11:35.548 ], 00:11:35.548 "driver_specific": {} 00:11:35.548 }' 00:11:35.548 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.808 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.068 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.068 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:36.068 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:36.068 06:06:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:36.068 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:36.329 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:36.329 "name": "BaseBdev3", 00:11:36.329 "aliases": [ 00:11:36.329 "79b512e4-49ca-4d01-bb3f-0513843bd51e" 00:11:36.329 ], 00:11:36.329 "product_name": "Malloc disk", 00:11:36.329 "block_size": 512, 00:11:36.329 "num_blocks": 65536, 00:11:36.329 "uuid": "79b512e4-49ca-4d01-bb3f-0513843bd51e", 00:11:36.329 "assigned_rate_limits": { 00:11:36.329 "rw_ios_per_sec": 0, 00:11:36.329 "rw_mbytes_per_sec": 0, 00:11:36.329 "r_mbytes_per_sec": 0, 00:11:36.329 "w_mbytes_per_sec": 0 00:11:36.329 }, 00:11:36.329 "claimed": true, 00:11:36.329 "claim_type": "exclusive_write", 00:11:36.329 "zoned": false, 00:11:36.329 "supported_io_types": { 00:11:36.329 "read": true, 00:11:36.329 "write": true, 00:11:36.329 "unmap": true, 00:11:36.329 "flush": true, 00:11:36.329 "reset": true, 00:11:36.329 "nvme_admin": false, 00:11:36.329 "nvme_io": false, 00:11:36.329 "nvme_io_md": false, 00:11:36.329 "write_zeroes": true, 00:11:36.329 "zcopy": true, 00:11:36.329 "get_zone_info": false, 00:11:36.329 "zone_management": false, 00:11:36.329 "zone_append": false, 00:11:36.329 "compare": false, 00:11:36.329 "compare_and_write": false, 00:11:36.329 "abort": true, 00:11:36.329 "seek_hole": false, 00:11:36.329 "seek_data": false, 00:11:36.329 "copy": true, 00:11:36.329 "nvme_iov_md": false 00:11:36.329 }, 00:11:36.329 "memory_domains": [ 00:11:36.329 { 00:11:36.329 "dma_device_id": "system", 00:11:36.329 "dma_device_type": 1 00:11:36.329 }, 00:11:36.329 { 00:11:36.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.329 "dma_device_type": 2 00:11:36.329 } 00:11:36.329 ], 00:11:36.329 "driver_specific": {} 00:11:36.329 }' 00:11:36.329 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:36.329 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:36.329 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:36.329 06:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:36.329 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:36.329 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:36.329 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:36.329 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:36.589 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:36.589 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.589 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.589 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:36.589 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:36.849 [2024-08-13 06:06:38.431883] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.849 [2024-08-13 
06:06:38.431974] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.849 [2024-08-13 06:06:38.432101] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.849 [2024-08-13 06:06:38.432387] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.849 [2024-08-13 06:06:38.432442] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 80333 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 80333 ']' 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 80333 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80333 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80333' 00:11:36.849 killing process with pid 80333 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 80333 00:11:36.849 [2024-08-13 06:06:38.492621] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.849 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 80333 00:11:36.849 [2024-08-13 06:06:38.524257] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:37.110 00:11:37.110 real 0m24.247s 00:11:37.110 user 0m45.167s 00:11:37.110 sys 0m3.687s 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:37.110 ************************************ 00:11:37.110 END TEST raid_state_function_test 00:11:37.110 ************************************ 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.110 06:06:38 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:37.110 06:06:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:37.110 06:06:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:37.110 06:06:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.110 ************************************ 00:11:37.110 START TEST raid_state_function_test_sb 00:11:37.110 ************************************ 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:37.110 Process raid pid: 81229 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=81229 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 81229' 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 81229 /var/tmp/spdk-raid.sock 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 81229 ']' 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:37.110 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:37.110 06:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.370 [2024-08-13 06:06:38.924599] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:11:37.370 [2024-08-13 06:06:38.924757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.370 [2024-08-13 06:06:39.070705] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.370 [2024-08-13 06:06:39.118252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.630 [2024-08-13 06:06:39.161751] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.630 [2024-08-13 06:06:39.161793] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:38.200 [2024-08-13 06:06:39.926072] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.200 [2024-08-13 06:06:39.926130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.200 [2024-08-13 06:06:39.926144] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:38.200 [2024-08-13 06:06:39.926152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:38.200 [2024-08-13 06:06:39.926162] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:38.200 [2024-08-13 06:06:39.926169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.200 06:06:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.200 06:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.460 06:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.460 "name": "Existed_Raid", 00:11:38.460 "uuid": "aed69350-7afb-4bf7-a2cc-9b8d8b54bc19", 00:11:38.460 "strip_size_kb": 0, 00:11:38.460 "state": "configuring", 00:11:38.460 "raid_level": "raid1", 00:11:38.460 "superblock": true, 00:11:38.460 "num_base_bdevs": 3, 00:11:38.460 "num_base_bdevs_discovered": 0, 00:11:38.460 "num_base_bdevs_operational": 3, 00:11:38.460 "base_bdevs_list": [ 00:11:38.460 { 00:11:38.460 "name": "BaseBdev1", 00:11:38.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.460 "is_configured": false, 00:11:38.460 "data_offset": 0, 00:11:38.460 "data_size": 0 00:11:38.460 }, 00:11:38.461 { 00:11:38.461 "name": "BaseBdev2", 00:11:38.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.461 "is_configured": false, 00:11:38.461 "data_offset": 0, 00:11:38.461 "data_size": 0 00:11:38.461 }, 00:11:38.461 { 00:11:38.461 "name": "BaseBdev3", 00:11:38.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.461 "is_configured": false, 00:11:38.461 "data_offset": 0, 00:11:38.461 "data_size": 0 00:11:38.461 } 00:11:38.461 ] 00:11:38.461 }' 00:11:38.461 06:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.461 06:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.029 06:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:39.289 [2024-08-13 06:06:40.852392] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:39.289 [2024-08-13 06:06:40.852510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:39.289 06:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:39.289 [2024-08-13 06:06:41.024144] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.289 [2024-08-13 06:06:41.024278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.289 [2024-08-13 06:06:41.024307] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.289 [2024-08-13 06:06:41.024327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.289 [2024-08-13 06:06:41.024347] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:39.289 [2024-08-13 06:06:41.024365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:39.289 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.548 [2024-08-13 06:06:41.204764] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.548 BaseBdev1 00:11:39.548 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:39.548 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:39.548 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:39.548 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:39.548 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:39.548 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:39.548 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.813 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.093 [ 00:11:40.093 { 00:11:40.093 "name": "BaseBdev1", 00:11:40.093 "aliases": [ 00:11:40.093 "50ff5795-f722-40be-99cf-86da7e52172d" 00:11:40.093 ], 00:11:40.093 "product_name": "Malloc disk", 00:11:40.093 "block_size": 512, 00:11:40.093 "num_blocks": 65536, 00:11:40.093 "uuid": "50ff5795-f722-40be-99cf-86da7e52172d", 00:11:40.093 "assigned_rate_limits": { 00:11:40.093 "rw_ios_per_sec": 0, 00:11:40.093 "rw_mbytes_per_sec": 0, 00:11:40.093 "r_mbytes_per_sec": 0, 00:11:40.093 "w_mbytes_per_sec": 0 00:11:40.093 }, 00:11:40.093 "claimed": true, 00:11:40.093 "claim_type": "exclusive_write", 00:11:40.093 "zoned": false, 00:11:40.093 "supported_io_types": { 00:11:40.093 "read": true, 00:11:40.093 "write": true, 00:11:40.093 "unmap": true, 00:11:40.093 "flush": true, 00:11:40.093 "reset": true, 00:11:40.093 "nvme_admin": false, 00:11:40.093 "nvme_io": false, 00:11:40.093 "nvme_io_md": false, 00:11:40.093 "write_zeroes": true, 00:11:40.093 "zcopy": true, 00:11:40.093 "get_zone_info": false, 00:11:40.093 "zone_management": false, 00:11:40.093 "zone_append": false, 00:11:40.093 "compare": false, 00:11:40.093 "compare_and_write": false, 00:11:40.093 "abort": true, 00:11:40.093 "seek_hole": false, 00:11:40.093 "seek_data": false, 00:11:40.093 "copy": true, 00:11:40.093 "nvme_iov_md": false 00:11:40.093 }, 00:11:40.093 "memory_domains": [ 00:11:40.093 { 00:11:40.093 "dma_device_id": "system", 00:11:40.093 "dma_device_type": 1 00:11:40.093 }, 00:11:40.093 { 00:11:40.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.093 "dma_device_type": 2 00:11:40.093 } 00:11:40.093 ], 00:11:40.093 "driver_specific": {} 00:11:40.093 } 00:11:40.093 ] 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:40.093 06:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.093 "name": "Existed_Raid", 00:11:40.093 "uuid": "f526fc7c-a598-4413-bdb7-f8950d40cb9a", 00:11:40.093 "strip_size_kb": 0, 00:11:40.093 "state": "configuring", 00:11:40.093 "raid_level": "raid1", 00:11:40.093 "superblock": true, 00:11:40.093 "num_base_bdevs": 3, 00:11:40.093 "num_base_bdevs_discovered": 1, 00:11:40.093 "num_base_bdevs_operational": 3, 00:11:40.093 "base_bdevs_list": [ 00:11:40.093 { 00:11:40.093 "name": "BaseBdev1", 00:11:40.093 "uuid": "50ff5795-f722-40be-99cf-86da7e52172d", 00:11:40.093 "is_configured": true, 00:11:40.093 "data_offset": 2048, 00:11:40.093 "data_size": 63488 00:11:40.093 }, 00:11:40.093 { 00:11:40.093 "name": "BaseBdev2", 00:11:40.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.093 "is_configured": false, 00:11:40.093 "data_offset": 0, 00:11:40.093 "data_size": 0 00:11:40.093 }, 00:11:40.093 { 00:11:40.093 "name": "BaseBdev3", 00:11:40.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.093 "is_configured": false, 00:11:40.093 "data_offset": 0, 00:11:40.093 "data_size": 0 00:11:40.093 } 00:11:40.093 ] 00:11:40.093 }' 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.093 06:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.661 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:40.921 [2024-08-13 06:06:42.510780] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.921 [2024-08-13 06:06:42.510840] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:40.921 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:40.921 [2024-08-13 06:06:42.706488] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.921 [2024-08-13 06:06:42.708351] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.921 [2024-08-13 06:06:42.708466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.921 [2024-08-13 06:06:42.708483] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.921 [2024-08-13 06:06:42.708507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.181 "name": "Existed_Raid", 00:11:41.181 "uuid": "06dcda46-88af-4658-a1d1-8ff7aab85642", 00:11:41.181 "strip_size_kb": 0, 00:11:41.181 "state": "configuring", 00:11:41.181 "raid_level": "raid1", 00:11:41.181 "superblock": true, 00:11:41.181 "num_base_bdevs": 3, 00:11:41.181 "num_base_bdevs_discovered": 1, 00:11:41.181 "num_base_bdevs_operational": 3, 00:11:41.181 "base_bdevs_list": [ 00:11:41.181 { 00:11:41.181 "name": "BaseBdev1", 00:11:41.181 "uuid": "50ff5795-f722-40be-99cf-86da7e52172d", 00:11:41.181 "is_configured": true, 00:11:41.181 "data_offset": 2048, 00:11:41.181 "data_size": 63488 00:11:41.181 }, 00:11:41.181 { 00:11:41.181 "name": "BaseBdev2", 00:11:41.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.181 "is_configured": false, 00:11:41.181 "data_offset": 0, 00:11:41.181 "data_size": 0 00:11:41.181 }, 00:11:41.181 { 00:11:41.181 "name": "BaseBdev3", 00:11:41.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.181 "is_configured": false, 00:11:41.181 "data_offset": 0, 00:11:41.181 "data_size": 0 00:11:41.181 } 00:11:41.181 ] 00:11:41.181 }' 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.181 06:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.749 06:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.012 [2024-08-13 06:06:43.632942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.012 BaseBdev2 00:11:42.012 06:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:42.012 06:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:42.012 06:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:42.012 06:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:42.012 06:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:42.012 06:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:42.012 06:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:42.272 06:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.272 [ 00:11:42.272 { 00:11:42.272 "name": "BaseBdev2", 00:11:42.272 "aliases": [ 00:11:42.272 "12a51761-4c44-4d2a-bea2-42cccb37caed" 00:11:42.272 ], 00:11:42.272 "product_name": "Malloc disk", 00:11:42.272 "block_size": 512, 00:11:42.272 "num_blocks": 65536, 00:11:42.272 "uuid": "12a51761-4c44-4d2a-bea2-42cccb37caed", 00:11:42.272 "assigned_rate_limits": { 00:11:42.272 "rw_ios_per_sec": 0, 00:11:42.272 "rw_mbytes_per_sec": 0, 00:11:42.272 "r_mbytes_per_sec": 0, 00:11:42.272 "w_mbytes_per_sec": 0 00:11:42.272 }, 00:11:42.272 "claimed": true, 00:11:42.272 "claim_type": "exclusive_write", 00:11:42.272 "zoned": false, 00:11:42.272 "supported_io_types": { 00:11:42.272 "read": true, 00:11:42.272 "write": true, 00:11:42.272 "unmap": true, 00:11:42.272 "flush": true, 00:11:42.272 "reset": true, 00:11:42.272 "nvme_admin": false, 00:11:42.272 "nvme_io": false, 00:11:42.272 "nvme_io_md": false, 00:11:42.272 "write_zeroes": true, 00:11:42.272 "zcopy": true, 00:11:42.272 "get_zone_info": false, 00:11:42.272 "zone_management": false, 00:11:42.272 "zone_append": false, 00:11:42.272 "compare": false, 00:11:42.272 "compare_and_write": false, 00:11:42.272 "abort": true, 00:11:42.272 "seek_hole": false, 00:11:42.272 "seek_data": false, 00:11:42.272 "copy": true, 00:11:42.272 "nvme_iov_md": false 00:11:42.272 }, 00:11:42.272 "memory_domains": [ 00:11:42.272 { 00:11:42.272 "dma_device_id": "system", 00:11:42.272 "dma_device_type": 1 00:11:42.272 }, 00:11:42.272 { 00:11:42.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.272 "dma_device_type": 2 00:11:42.272 } 00:11:42.272 ], 00:11:42.272 "driver_specific": {} 00:11:42.272 } 00:11:42.272 ] 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.272 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.531 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.531 "name": "Existed_Raid", 00:11:42.531 "uuid": "06dcda46-88af-4658-a1d1-8ff7aab85642", 00:11:42.531 "strip_size_kb": 0, 00:11:42.531 "state": "configuring", 00:11:42.531 "raid_level": "raid1", 00:11:42.531 "superblock": true, 00:11:42.531 "num_base_bdevs": 3, 00:11:42.531 "num_base_bdevs_discovered": 2, 00:11:42.531 "num_base_bdevs_operational": 3, 00:11:42.531 "base_bdevs_list": [ 00:11:42.531 { 00:11:42.531 "name": "BaseBdev1", 00:11:42.531 "uuid": "50ff5795-f722-40be-99cf-86da7e52172d", 00:11:42.531 "is_configured": true, 00:11:42.531 "data_offset": 2048, 00:11:42.531 "data_size": 63488 00:11:42.531 }, 00:11:42.531 { 00:11:42.531 "name": "BaseBdev2", 00:11:42.531 "uuid": "12a51761-4c44-4d2a-bea2-42cccb37caed", 00:11:42.531 "is_configured": true, 00:11:42.531 "data_offset": 2048, 00:11:42.531 "data_size": 63488 00:11:42.531 }, 00:11:42.531 { 00:11:42.531 "name": "BaseBdev3", 00:11:42.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.531 "is_configured": false, 00:11:42.531 "data_offset": 0, 00:11:42.531 "data_size": 0 00:11:42.531 } 00:11:42.531 ] 00:11:42.531 }' 00:11:42.531 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.531 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.099 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:43.357 [2024-08-13 06:06:44.930344] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.357 [2024-08-13 06:06:44.930554] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:43.357 [2024-08-13 06:06:44.930570] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.357 [2024-08-13 06:06:44.930872] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:43.357 [2024-08-13 06:06:44.931006] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 
00:11:43.357 [2024-08-13 06:06:44.931023] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:11:43.357 [2024-08-13 06:06:44.931156] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.357 BaseBdev3 00:11:43.357 06:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:43.358 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:43.358 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:43.358 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:43.358 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:43.358 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:43.358 06:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.358 06:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:43.617 [ 00:11:43.617 { 00:11:43.617 "name": "BaseBdev3", 00:11:43.617 "aliases": [ 00:11:43.617 "980bb172-ad5d-415f-b2c6-9739a649c343" 00:11:43.617 ], 00:11:43.617 "product_name": "Malloc disk", 00:11:43.617 "block_size": 512, 00:11:43.617 "num_blocks": 65536, 00:11:43.617 "uuid": "980bb172-ad5d-415f-b2c6-9739a649c343", 00:11:43.617 "assigned_rate_limits": { 00:11:43.617 "rw_ios_per_sec": 0, 00:11:43.617 "rw_mbytes_per_sec": 0, 00:11:43.617 "r_mbytes_per_sec": 0, 00:11:43.617 "w_mbytes_per_sec": 0 00:11:43.617 }, 00:11:43.617 "claimed": true, 00:11:43.617 "claim_type": "exclusive_write", 00:11:43.617 "zoned": false, 00:11:43.617 "supported_io_types": { 00:11:43.617 "read": true, 00:11:43.617 "write": true, 00:11:43.617 "unmap": true, 00:11:43.617 "flush": true, 00:11:43.617 "reset": true, 00:11:43.617 "nvme_admin": false, 00:11:43.617 "nvme_io": false, 00:11:43.617 "nvme_io_md": false, 00:11:43.617 "write_zeroes": true, 00:11:43.617 "zcopy": true, 00:11:43.617 "get_zone_info": false, 00:11:43.617 "zone_management": false, 00:11:43.617 "zone_append": false, 00:11:43.617 "compare": false, 00:11:43.617 "compare_and_write": false, 00:11:43.617 "abort": true, 00:11:43.617 "seek_hole": false, 00:11:43.617 "seek_data": false, 00:11:43.617 "copy": true, 00:11:43.617 "nvme_iov_md": false 00:11:43.617 }, 00:11:43.617 "memory_domains": [ 00:11:43.617 { 00:11:43.617 "dma_device_id": "system", 00:11:43.617 "dma_device_type": 1 00:11:43.617 }, 00:11:43.617 { 00:11:43.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.617 "dma_device_type": 2 00:11:43.617 } 00:11:43.617 ], 00:11:43.617 "driver_specific": {} 00:11:43.617 } 00:11:43.617 ] 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:43.617 06:06:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.617 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.877 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.877 "name": "Existed_Raid", 00:11:43.877 "uuid": "06dcda46-88af-4658-a1d1-8ff7aab85642", 00:11:43.877 "strip_size_kb": 0, 00:11:43.877 "state": "online", 00:11:43.877 "raid_level": "raid1", 00:11:43.877 "superblock": true, 00:11:43.877 "num_base_bdevs": 3, 00:11:43.877 "num_base_bdevs_discovered": 3, 00:11:43.877 "num_base_bdevs_operational": 3, 00:11:43.877 "base_bdevs_list": [ 00:11:43.877 { 00:11:43.877 "name": "BaseBdev1", 00:11:43.877 "uuid": "50ff5795-f722-40be-99cf-86da7e52172d", 00:11:43.877 "is_configured": true, 00:11:43.877 "data_offset": 2048, 00:11:43.877 "data_size": 63488 00:11:43.877 }, 00:11:43.877 { 00:11:43.877 "name": "BaseBdev2", 00:11:43.877 "uuid": "12a51761-4c44-4d2a-bea2-42cccb37caed", 00:11:43.877 "is_configured": true, 00:11:43.877 "data_offset": 2048, 00:11:43.877 "data_size": 63488 00:11:43.877 }, 00:11:43.877 { 00:11:43.877 "name": "BaseBdev3", 00:11:43.877 "uuid": "980bb172-ad5d-415f-b2c6-9739a649c343", 00:11:43.877 "is_configured": true, 00:11:43.877 "data_offset": 2048, 00:11:43.877 "data_size": 63488 00:11:43.877 } 00:11:43.877 ] 00:11:43.877 }' 00:11:43.877 06:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.877 06:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:44.446 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:44.706 [2024-08-13 06:06:46.244476] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.706 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:44.706 "name": "Existed_Raid", 00:11:44.706 "aliases": [ 00:11:44.706 "06dcda46-88af-4658-a1d1-8ff7aab85642" 00:11:44.706 ], 00:11:44.706 "product_name": "Raid Volume", 00:11:44.706 "block_size": 512, 00:11:44.706 "num_blocks": 63488, 00:11:44.706 "uuid": "06dcda46-88af-4658-a1d1-8ff7aab85642", 00:11:44.706 "assigned_rate_limits": { 00:11:44.706 "rw_ios_per_sec": 0, 00:11:44.706 "rw_mbytes_per_sec": 0, 00:11:44.706 "r_mbytes_per_sec": 0, 00:11:44.706 "w_mbytes_per_sec": 0 00:11:44.706 }, 00:11:44.706 "claimed": false, 00:11:44.706 "zoned": false, 00:11:44.706 "supported_io_types": { 00:11:44.706 "read": true, 00:11:44.706 "write": true, 00:11:44.706 "unmap": false, 00:11:44.706 "flush": false, 00:11:44.706 "reset": true, 00:11:44.706 "nvme_admin": false, 00:11:44.706 "nvme_io": false, 00:11:44.706 "nvme_io_md": false, 00:11:44.706 "write_zeroes": true, 00:11:44.706 "zcopy": false, 00:11:44.706 "get_zone_info": false, 00:11:44.706 "zone_management": false, 00:11:44.706 "zone_append": false, 00:11:44.706 "compare": false, 00:11:44.706 "compare_and_write": false, 00:11:44.706 "abort": false, 00:11:44.706 "seek_hole": false, 00:11:44.706 "seek_data": false, 00:11:44.706 "copy": false, 00:11:44.706 "nvme_iov_md": false 00:11:44.706 }, 00:11:44.706 "memory_domains": [ 00:11:44.706 { 00:11:44.706 "dma_device_id": "system", 00:11:44.706 "dma_device_type": 1 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.706 "dma_device_type": 2 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "dma_device_id": "system", 00:11:44.706 "dma_device_type": 1 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.706 "dma_device_type": 2 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "dma_device_id": "system", 00:11:44.706 "dma_device_type": 1 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.706 "dma_device_type": 2 00:11:44.706 } 00:11:44.706 ], 00:11:44.706 "driver_specific": { 00:11:44.706 "raid": { 00:11:44.706 "uuid": "06dcda46-88af-4658-a1d1-8ff7aab85642", 00:11:44.706 "strip_size_kb": 0, 00:11:44.706 "state": "online", 00:11:44.706 "raid_level": "raid1", 00:11:44.706 "superblock": true, 00:11:44.706 "num_base_bdevs": 3, 00:11:44.706 "num_base_bdevs_discovered": 3, 00:11:44.706 "num_base_bdevs_operational": 3, 00:11:44.706 "base_bdevs_list": [ 00:11:44.706 { 00:11:44.706 "name": "BaseBdev1", 00:11:44.706 "uuid": "50ff5795-f722-40be-99cf-86da7e52172d", 00:11:44.706 "is_configured": true, 00:11:44.706 "data_offset": 2048, 00:11:44.706 "data_size": 63488 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "name": "BaseBdev2", 00:11:44.706 "uuid": "12a51761-4c44-4d2a-bea2-42cccb37caed", 00:11:44.706 "is_configured": true, 00:11:44.706 "data_offset": 2048, 00:11:44.706 "data_size": 63488 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "name": "BaseBdev3", 00:11:44.706 "uuid": "980bb172-ad5d-415f-b2c6-9739a649c343", 00:11:44.706 "is_configured": true, 00:11:44.706 "data_offset": 2048, 00:11:44.706 "data_size": 63488 00:11:44.706 } 00:11:44.706 ] 00:11:44.706 } 00:11:44.706 } 
00:11:44.706 }' 00:11:44.706 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.706 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:44.706 BaseBdev2 00:11:44.706 BaseBdev3' 00:11:44.706 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:44.706 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:44.706 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:44.966 "name": "BaseBdev1", 00:11:44.966 "aliases": [ 00:11:44.966 "50ff5795-f722-40be-99cf-86da7e52172d" 00:11:44.966 ], 00:11:44.966 "product_name": "Malloc disk", 00:11:44.966 "block_size": 512, 00:11:44.966 "num_blocks": 65536, 00:11:44.966 "uuid": "50ff5795-f722-40be-99cf-86da7e52172d", 00:11:44.966 "assigned_rate_limits": { 00:11:44.966 "rw_ios_per_sec": 0, 00:11:44.966 "rw_mbytes_per_sec": 0, 00:11:44.966 "r_mbytes_per_sec": 0, 00:11:44.966 "w_mbytes_per_sec": 0 00:11:44.966 }, 00:11:44.966 "claimed": true, 00:11:44.966 "claim_type": "exclusive_write", 00:11:44.966 "zoned": false, 00:11:44.966 "supported_io_types": { 00:11:44.966 "read": true, 00:11:44.966 "write": true, 00:11:44.966 "unmap": true, 00:11:44.966 "flush": true, 00:11:44.966 "reset": true, 00:11:44.966 "nvme_admin": false, 00:11:44.966 "nvme_io": false, 00:11:44.966 "nvme_io_md": false, 00:11:44.966 "write_zeroes": true, 00:11:44.966 "zcopy": true, 00:11:44.966 "get_zone_info": false, 00:11:44.966 "zone_management": false, 00:11:44.966 "zone_append": false, 00:11:44.966 "compare": false, 00:11:44.966 "compare_and_write": false, 00:11:44.966 "abort": true, 00:11:44.966 "seek_hole": false, 00:11:44.966 "seek_data": false, 00:11:44.966 "copy": true, 00:11:44.966 "nvme_iov_md": false 00:11:44.966 }, 00:11:44.966 "memory_domains": [ 00:11:44.966 { 00:11:44.966 "dma_device_id": "system", 00:11:44.966 "dma_device_type": 1 00:11:44.966 }, 00:11:44.966 { 00:11:44.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.966 "dma_device_type": 2 00:11:44.966 } 00:11:44.966 ], 00:11:44.966 "driver_specific": {} 00:11:44.966 }' 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.966 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.226 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:45.226 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:11:45.226 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.226 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.226 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.226 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:45.226 06:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:45.485 "name": "BaseBdev2", 00:11:45.485 "aliases": [ 00:11:45.485 "12a51761-4c44-4d2a-bea2-42cccb37caed" 00:11:45.485 ], 00:11:45.485 "product_name": "Malloc disk", 00:11:45.485 "block_size": 512, 00:11:45.485 "num_blocks": 65536, 00:11:45.485 "uuid": "12a51761-4c44-4d2a-bea2-42cccb37caed", 00:11:45.485 "assigned_rate_limits": { 00:11:45.485 "rw_ios_per_sec": 0, 00:11:45.485 "rw_mbytes_per_sec": 0, 00:11:45.485 "r_mbytes_per_sec": 0, 00:11:45.485 "w_mbytes_per_sec": 0 00:11:45.485 }, 00:11:45.485 "claimed": true, 00:11:45.485 "claim_type": "exclusive_write", 00:11:45.485 "zoned": false, 00:11:45.485 "supported_io_types": { 00:11:45.485 "read": true, 00:11:45.485 "write": true, 00:11:45.485 "unmap": true, 00:11:45.485 "flush": true, 00:11:45.485 "reset": true, 00:11:45.485 "nvme_admin": false, 00:11:45.485 "nvme_io": false, 00:11:45.485 "nvme_io_md": false, 00:11:45.485 "write_zeroes": true, 00:11:45.485 "zcopy": true, 00:11:45.485 "get_zone_info": false, 00:11:45.485 "zone_management": false, 00:11:45.485 "zone_append": false, 00:11:45.485 "compare": false, 00:11:45.485 "compare_and_write": false, 00:11:45.485 "abort": true, 00:11:45.485 "seek_hole": false, 00:11:45.485 "seek_data": false, 00:11:45.485 "copy": true, 00:11:45.485 "nvme_iov_md": false 00:11:45.485 }, 00:11:45.485 "memory_domains": [ 00:11:45.485 { 00:11:45.485 "dma_device_id": "system", 00:11:45.485 "dma_device_type": 1 00:11:45.485 }, 00:11:45.485 { 00:11:45.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.485 "dma_device_type": 2 00:11:45.485 } 00:11:45.485 ], 00:11:45.485 "driver_specific": {} 00:11:45.485 }' 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:45.485 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.745 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.745 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:45.745 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.745 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.745 06:06:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.745 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.745 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:45.745 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:46.005 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.005 "name": "BaseBdev3", 00:11:46.005 "aliases": [ 00:11:46.005 "980bb172-ad5d-415f-b2c6-9739a649c343" 00:11:46.005 ], 00:11:46.005 "product_name": "Malloc disk", 00:11:46.005 "block_size": 512, 00:11:46.005 "num_blocks": 65536, 00:11:46.005 "uuid": "980bb172-ad5d-415f-b2c6-9739a649c343", 00:11:46.005 "assigned_rate_limits": { 00:11:46.005 "rw_ios_per_sec": 0, 00:11:46.005 "rw_mbytes_per_sec": 0, 00:11:46.005 "r_mbytes_per_sec": 0, 00:11:46.005 "w_mbytes_per_sec": 0 00:11:46.005 }, 00:11:46.005 "claimed": true, 00:11:46.005 "claim_type": "exclusive_write", 00:11:46.005 "zoned": false, 00:11:46.005 "supported_io_types": { 00:11:46.005 "read": true, 00:11:46.005 "write": true, 00:11:46.005 "unmap": true, 00:11:46.005 "flush": true, 00:11:46.005 "reset": true, 00:11:46.005 "nvme_admin": false, 00:11:46.005 "nvme_io": false, 00:11:46.005 "nvme_io_md": false, 00:11:46.005 "write_zeroes": true, 00:11:46.005 "zcopy": true, 00:11:46.005 "get_zone_info": false, 00:11:46.005 "zone_management": false, 00:11:46.005 "zone_append": false, 00:11:46.005 "compare": false, 00:11:46.005 "compare_and_write": false, 00:11:46.005 "abort": true, 00:11:46.005 "seek_hole": false, 00:11:46.005 "seek_data": false, 00:11:46.005 "copy": true, 00:11:46.005 "nvme_iov_md": false 00:11:46.005 }, 00:11:46.005 "memory_domains": [ 00:11:46.005 { 00:11:46.005 "dma_device_id": "system", 00:11:46.005 "dma_device_type": 1 00:11:46.005 }, 00:11:46.005 { 00:11:46.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.005 "dma_device_type": 2 00:11:46.005 } 00:11:46.005 ], 00:11:46.005 "driver_specific": {} 00:11:46.005 }' 00:11:46.006 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.006 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.006 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.006 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.006 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.006 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.006 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.265 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.265 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.265 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.265 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.265 06:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.265 06:06:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:46.525 [2024-08-13 06:06:48.173068] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.525 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.784 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.784 "name": "Existed_Raid", 00:11:46.784 "uuid": "06dcda46-88af-4658-a1d1-8ff7aab85642", 00:11:46.784 "strip_size_kb": 0, 00:11:46.784 "state": "online", 00:11:46.784 "raid_level": "raid1", 00:11:46.784 "superblock": true, 00:11:46.784 "num_base_bdevs": 3, 00:11:46.784 "num_base_bdevs_discovered": 2, 00:11:46.784 "num_base_bdevs_operational": 2, 00:11:46.784 "base_bdevs_list": [ 00:11:46.784 { 00:11:46.784 "name": null, 00:11:46.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.784 "is_configured": false, 00:11:46.784 "data_offset": 2048, 00:11:46.784 "data_size": 63488 00:11:46.784 }, 00:11:46.784 { 00:11:46.784 "name": "BaseBdev2", 00:11:46.784 "uuid": "12a51761-4c44-4d2a-bea2-42cccb37caed", 00:11:46.784 "is_configured": true, 00:11:46.784 "data_offset": 2048, 00:11:46.784 "data_size": 63488 00:11:46.784 }, 00:11:46.784 { 00:11:46.784 "name": "BaseBdev3", 00:11:46.784 "uuid": "980bb172-ad5d-415f-b2c6-9739a649c343", 00:11:46.784 "is_configured": true, 00:11:46.784 "data_offset": 2048, 00:11:46.784 "data_size": 63488 00:11:46.784 } 00:11:46.784 ] 00:11:46.784 }' 00:11:46.784 06:06:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.784 06:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.354 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:47.354 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:47.354 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.354 06:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:47.614 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:47.614 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.614 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:47.614 [2024-08-13 06:06:49.346614] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.614 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:47.614 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:47.614 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.614 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:47.873 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:47.873 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.873 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:48.133 [2024-08-13 06:06:49.757145] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.133 [2024-08-13 06:06:49.757337] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.133 [2024-08-13 06:06:49.768513] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.133 [2024-08-13 06:06:49.768566] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.133 [2024-08-13 06:06:49.768581] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:48.133 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:48.133 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:48.133 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:48.133 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.391 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:48.391 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:48.391 
06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:48.391 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:48.391 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:48.391 06:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:48.391 BaseBdev2 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:48.651 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:48.911 [ 00:11:48.911 { 00:11:48.911 "name": "BaseBdev2", 00:11:48.911 "aliases": [ 00:11:48.911 "feaca38e-8481-4577-af55-6aea4e22da06" 00:11:48.911 ], 00:11:48.911 "product_name": "Malloc disk", 00:11:48.911 "block_size": 512, 00:11:48.911 "num_blocks": 65536, 00:11:48.911 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:48.911 "assigned_rate_limits": { 00:11:48.911 "rw_ios_per_sec": 0, 00:11:48.911 "rw_mbytes_per_sec": 0, 00:11:48.911 "r_mbytes_per_sec": 0, 00:11:48.911 "w_mbytes_per_sec": 0 00:11:48.911 }, 00:11:48.911 "claimed": false, 00:11:48.911 "zoned": false, 00:11:48.911 "supported_io_types": { 00:11:48.911 "read": true, 00:11:48.911 "write": true, 00:11:48.911 "unmap": true, 00:11:48.911 "flush": true, 00:11:48.911 "reset": true, 00:11:48.911 "nvme_admin": false, 00:11:48.911 "nvme_io": false, 00:11:48.911 "nvme_io_md": false, 00:11:48.911 "write_zeroes": true, 00:11:48.911 "zcopy": true, 00:11:48.911 "get_zone_info": false, 00:11:48.911 "zone_management": false, 00:11:48.911 "zone_append": false, 00:11:48.911 "compare": false, 00:11:48.911 "compare_and_write": false, 00:11:48.911 "abort": true, 00:11:48.911 "seek_hole": false, 00:11:48.911 "seek_data": false, 00:11:48.911 "copy": true, 00:11:48.911 "nvme_iov_md": false 00:11:48.911 }, 00:11:48.911 "memory_domains": [ 00:11:48.911 { 00:11:48.911 "dma_device_id": "system", 00:11:48.911 "dma_device_type": 1 00:11:48.911 }, 00:11:48.911 { 00:11:48.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.911 "dma_device_type": 2 00:11:48.911 } 00:11:48.911 ], 00:11:48.911 "driver_specific": {} 00:11:48.911 } 00:11:48.911 ] 00:11:48.911 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:48.911 06:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:48.911 06:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
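Before the array is re-created, the loop at bdev/bdev_raid.sh@301-303 provisions the remaining base bdevs and waits for each one, and the raid is then assembled on top of them. A condensed sketch of that sequence, reusing the socket, names and sizes from the trace (the bare -s flag here corresponds to the superblock variant this _sb test exercises):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # provision the two pre-existing members; BaseBdev1 is intentionally created later
    for b in BaseBdev2 BaseBdev3; do
        $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
        $rpc -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        $rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$b" -t 2000
    done
    # assemble a raid1 array named Existed_Raid over all three members, with superblock;
    # since BaseBdev1 does not exist yet, the array stays in the "configuring" state
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
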
00:11:48.911 06:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.171 BaseBdev3 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:49.171 06:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.431 [ 00:11:49.431 { 00:11:49.431 "name": "BaseBdev3", 00:11:49.431 "aliases": [ 00:11:49.431 "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a" 00:11:49.431 ], 00:11:49.431 "product_name": "Malloc disk", 00:11:49.431 "block_size": 512, 00:11:49.431 "num_blocks": 65536, 00:11:49.431 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:49.431 "assigned_rate_limits": { 00:11:49.431 "rw_ios_per_sec": 0, 00:11:49.431 "rw_mbytes_per_sec": 0, 00:11:49.431 "r_mbytes_per_sec": 0, 00:11:49.431 "w_mbytes_per_sec": 0 00:11:49.431 }, 00:11:49.431 "claimed": false, 00:11:49.431 "zoned": false, 00:11:49.431 "supported_io_types": { 00:11:49.431 "read": true, 00:11:49.431 "write": true, 00:11:49.431 "unmap": true, 00:11:49.431 "flush": true, 00:11:49.431 "reset": true, 00:11:49.431 "nvme_admin": false, 00:11:49.431 "nvme_io": false, 00:11:49.431 "nvme_io_md": false, 00:11:49.431 "write_zeroes": true, 00:11:49.431 "zcopy": true, 00:11:49.431 "get_zone_info": false, 00:11:49.431 "zone_management": false, 00:11:49.431 "zone_append": false, 00:11:49.431 "compare": false, 00:11:49.431 "compare_and_write": false, 00:11:49.431 "abort": true, 00:11:49.431 "seek_hole": false, 00:11:49.431 "seek_data": false, 00:11:49.431 "copy": true, 00:11:49.431 "nvme_iov_md": false 00:11:49.431 }, 00:11:49.431 "memory_domains": [ 00:11:49.431 { 00:11:49.431 "dma_device_id": "system", 00:11:49.431 "dma_device_type": 1 00:11:49.431 }, 00:11:49.431 { 00:11:49.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.431 "dma_device_type": 2 00:11:49.431 } 00:11:49.431 ], 00:11:49.431 "driver_specific": {} 00:11:49.431 } 00:11:49.431 ] 00:11:49.431 06:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:49.431 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:49.431 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:49.431 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:49.691 [2024-08-13 06:06:51.295412] bdev.c:8234:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.691 [2024-08-13 06:06:51.295548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.691 [2024-08-13 06:06:51.295582] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.691 [2024-08-13 06:06:51.297572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.691 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.951 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.951 "name": "Existed_Raid", 00:11:49.951 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:49.951 "strip_size_kb": 0, 00:11:49.951 "state": "configuring", 00:11:49.951 "raid_level": "raid1", 00:11:49.951 "superblock": true, 00:11:49.951 "num_base_bdevs": 3, 00:11:49.951 "num_base_bdevs_discovered": 2, 00:11:49.951 "num_base_bdevs_operational": 3, 00:11:49.951 "base_bdevs_list": [ 00:11:49.951 { 00:11:49.951 "name": "BaseBdev1", 00:11:49.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.951 "is_configured": false, 00:11:49.951 "data_offset": 0, 00:11:49.951 "data_size": 0 00:11:49.951 }, 00:11:49.951 { 00:11:49.951 "name": "BaseBdev2", 00:11:49.951 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:49.951 "is_configured": true, 00:11:49.951 "data_offset": 2048, 00:11:49.951 "data_size": 63488 00:11:49.951 }, 00:11:49.951 { 00:11:49.951 "name": "BaseBdev3", 00:11:49.951 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:49.951 "is_configured": true, 00:11:49.951 "data_offset": 2048, 00:11:49.951 "data_size": 63488 00:11:49.951 } 00:11:49.951 ] 00:11:49.951 }' 00:11:49.951 06:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.951 06:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:11:50.527 [2024-08-13 06:06:52.253803] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.527 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.787 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.787 "name": "Existed_Raid", 00:11:50.787 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:50.787 "strip_size_kb": 0, 00:11:50.787 "state": "configuring", 00:11:50.787 "raid_level": "raid1", 00:11:50.787 "superblock": true, 00:11:50.787 "num_base_bdevs": 3, 00:11:50.787 "num_base_bdevs_discovered": 1, 00:11:50.787 "num_base_bdevs_operational": 3, 00:11:50.787 "base_bdevs_list": [ 00:11:50.787 { 00:11:50.787 "name": "BaseBdev1", 00:11:50.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.787 "is_configured": false, 00:11:50.787 "data_offset": 0, 00:11:50.787 "data_size": 0 00:11:50.787 }, 00:11:50.787 { 00:11:50.787 "name": null, 00:11:50.787 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:50.787 "is_configured": false, 00:11:50.787 "data_offset": 2048, 00:11:50.787 "data_size": 63488 00:11:50.787 }, 00:11:50.787 { 00:11:50.787 "name": "BaseBdev3", 00:11:50.787 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:50.787 "is_configured": true, 00:11:50.787 "data_offset": 2048, 00:11:50.787 "data_size": 63488 00:11:50.787 } 00:11:50.787 ] 00:11:50.787 }' 00:11:50.787 06:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.787 06:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.357 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.357 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:51.616 06:06:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.616 [2024-08-13 06:06:53.354941] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.616 BaseBdev1 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:51.616 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:51.876 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.136 [ 00:11:52.136 { 00:11:52.136 "name": "BaseBdev1", 00:11:52.136 "aliases": [ 00:11:52.136 "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30" 00:11:52.136 ], 00:11:52.136 "product_name": "Malloc disk", 00:11:52.136 "block_size": 512, 00:11:52.136 "num_blocks": 65536, 00:11:52.136 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:52.136 "assigned_rate_limits": { 00:11:52.136 "rw_ios_per_sec": 0, 00:11:52.136 "rw_mbytes_per_sec": 0, 00:11:52.136 "r_mbytes_per_sec": 0, 00:11:52.136 "w_mbytes_per_sec": 0 00:11:52.136 }, 00:11:52.136 "claimed": true, 00:11:52.136 "claim_type": "exclusive_write", 00:11:52.136 "zoned": false, 00:11:52.136 "supported_io_types": { 00:11:52.136 "read": true, 00:11:52.136 "write": true, 00:11:52.136 "unmap": true, 00:11:52.136 "flush": true, 00:11:52.136 "reset": true, 00:11:52.136 "nvme_admin": false, 00:11:52.136 "nvme_io": false, 00:11:52.136 "nvme_io_md": false, 00:11:52.136 "write_zeroes": true, 00:11:52.136 "zcopy": true, 00:11:52.136 "get_zone_info": false, 00:11:52.136 "zone_management": false, 00:11:52.136 "zone_append": false, 00:11:52.136 "compare": false, 00:11:52.136 "compare_and_write": false, 00:11:52.136 "abort": true, 00:11:52.136 "seek_hole": false, 00:11:52.136 "seek_data": false, 00:11:52.136 "copy": true, 00:11:52.136 "nvme_iov_md": false 00:11:52.136 }, 00:11:52.136 "memory_domains": [ 00:11:52.136 { 00:11:52.136 "dma_device_id": "system", 00:11:52.136 "dma_device_type": 1 00:11:52.136 }, 00:11:52.136 { 00:11:52.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.136 "dma_device_type": 2 00:11:52.136 } 00:11:52.136 ], 00:11:52.136 "driver_specific": {} 00:11:52.136 } 00:11:52.136 ] 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.136 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.395 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.395 "name": "Existed_Raid", 00:11:52.395 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:52.395 "strip_size_kb": 0, 00:11:52.395 "state": "configuring", 00:11:52.395 "raid_level": "raid1", 00:11:52.395 "superblock": true, 00:11:52.395 "num_base_bdevs": 3, 00:11:52.395 "num_base_bdevs_discovered": 2, 00:11:52.395 "num_base_bdevs_operational": 3, 00:11:52.395 "base_bdevs_list": [ 00:11:52.395 { 00:11:52.395 "name": "BaseBdev1", 00:11:52.395 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:52.395 "is_configured": true, 00:11:52.395 "data_offset": 2048, 00:11:52.395 "data_size": 63488 00:11:52.395 }, 00:11:52.395 { 00:11:52.395 "name": null, 00:11:52.395 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:52.395 "is_configured": false, 00:11:52.395 "data_offset": 2048, 00:11:52.395 "data_size": 63488 00:11:52.395 }, 00:11:52.395 { 00:11:52.395 "name": "BaseBdev3", 00:11:52.395 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:52.395 "is_configured": true, 00:11:52.395 "data_offset": 2048, 00:11:52.395 "data_size": 63488 00:11:52.395 } 00:11:52.395 ] 00:11:52.395 }' 00:11:52.395 06:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.395 06:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.965 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.965 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.965 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:52.965 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:53.224 [2024-08-13 06:06:54.924430] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.224 06:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.484 06:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.484 "name": "Existed_Raid", 00:11:53.484 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:53.484 "strip_size_kb": 0, 00:11:53.484 "state": "configuring", 00:11:53.484 "raid_level": "raid1", 00:11:53.484 "superblock": true, 00:11:53.484 "num_base_bdevs": 3, 00:11:53.484 "num_base_bdevs_discovered": 1, 00:11:53.484 "num_base_bdevs_operational": 3, 00:11:53.484 "base_bdevs_list": [ 00:11:53.484 { 00:11:53.484 "name": "BaseBdev1", 00:11:53.484 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:53.484 "is_configured": true, 00:11:53.484 "data_offset": 2048, 00:11:53.484 "data_size": 63488 00:11:53.484 }, 00:11:53.484 { 00:11:53.484 "name": null, 00:11:53.484 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:53.484 "is_configured": false, 00:11:53.484 "data_offset": 2048, 00:11:53.484 "data_size": 63488 00:11:53.484 }, 00:11:53.484 { 00:11:53.484 "name": null, 00:11:53.484 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:53.484 "is_configured": false, 00:11:53.484 "data_offset": 2048, 00:11:53.484 "data_size": 63488 00:11:53.484 } 00:11:53.484 ] 00:11:53.484 }' 00:11:53.484 06:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.484 06:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.053 06:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.053 06:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.313 06:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:54.313 06:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:54.573 [2024-08-13 06:06:56.142876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.573 "name": "Existed_Raid", 00:11:54.573 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:54.573 "strip_size_kb": 0, 00:11:54.573 "state": "configuring", 00:11:54.573 "raid_level": "raid1", 00:11:54.573 "superblock": true, 00:11:54.573 "num_base_bdevs": 3, 00:11:54.573 "num_base_bdevs_discovered": 2, 00:11:54.573 "num_base_bdevs_operational": 3, 00:11:54.573 "base_bdevs_list": [ 00:11:54.573 { 00:11:54.573 "name": "BaseBdev1", 00:11:54.573 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:54.573 "is_configured": true, 00:11:54.573 "data_offset": 2048, 00:11:54.573 "data_size": 63488 00:11:54.573 }, 00:11:54.573 { 00:11:54.573 "name": null, 00:11:54.573 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:54.573 "is_configured": false, 00:11:54.573 "data_offset": 2048, 00:11:54.573 "data_size": 63488 00:11:54.573 }, 00:11:54.573 { 00:11:54.573 "name": "BaseBdev3", 00:11:54.573 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:54.573 "is_configured": true, 00:11:54.573 "data_offset": 2048, 00:11:54.573 "data_size": 63488 00:11:54.573 } 00:11:54.573 ] 00:11:54.573 }' 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.573 06:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.526 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.526 06:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:55.526 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:55.526 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:55.784 [2024-08-13 06:06:57.324906] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.784 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.044 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:56.044 "name": "Existed_Raid", 00:11:56.044 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:56.044 "strip_size_kb": 0, 00:11:56.044 "state": "configuring", 00:11:56.044 "raid_level": "raid1", 00:11:56.044 "superblock": true, 00:11:56.044 "num_base_bdevs": 3, 00:11:56.044 "num_base_bdevs_discovered": 1, 00:11:56.044 "num_base_bdevs_operational": 3, 00:11:56.044 "base_bdevs_list": [ 00:11:56.044 { 00:11:56.044 "name": null, 00:11:56.044 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:56.044 "is_configured": false, 00:11:56.044 "data_offset": 2048, 00:11:56.044 "data_size": 63488 00:11:56.044 }, 00:11:56.044 { 00:11:56.044 "name": null, 00:11:56.044 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:56.044 "is_configured": false, 00:11:56.044 "data_offset": 2048, 00:11:56.044 "data_size": 63488 00:11:56.044 }, 00:11:56.044 { 00:11:56.044 "name": "BaseBdev3", 00:11:56.044 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:56.044 "is_configured": true, 00:11:56.044 "data_offset": 2048, 00:11:56.044 "data_size": 63488 00:11:56.044 } 00:11:56.044 ] 00:11:56.044 }' 00:11:56.044 06:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.044 06:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.613 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.613 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.613 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:56.613 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:56.873 [2024-08-13 06:06:58.557500] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.873 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.132 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.132 "name": "Existed_Raid", 00:11:57.132 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:57.132 "strip_size_kb": 0, 00:11:57.132 "state": "configuring", 00:11:57.132 "raid_level": "raid1", 00:11:57.132 "superblock": true, 00:11:57.132 "num_base_bdevs": 3, 00:11:57.132 "num_base_bdevs_discovered": 2, 00:11:57.132 "num_base_bdevs_operational": 3, 00:11:57.132 "base_bdevs_list": [ 00:11:57.132 { 00:11:57.132 "name": null, 00:11:57.132 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:57.132 "is_configured": false, 00:11:57.132 "data_offset": 2048, 00:11:57.132 "data_size": 63488 00:11:57.132 }, 00:11:57.132 { 00:11:57.132 "name": "BaseBdev2", 00:11:57.133 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:57.133 "is_configured": true, 00:11:57.133 "data_offset": 2048, 00:11:57.133 "data_size": 63488 00:11:57.133 }, 00:11:57.133 { 00:11:57.133 "name": "BaseBdev3", 00:11:57.133 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:57.133 "is_configured": true, 00:11:57.133 "data_offset": 2048, 00:11:57.133 "data_size": 63488 00:11:57.133 } 00:11:57.133 ] 00:11:57.133 }' 00:11:57.133 06:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.133 06:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.702 06:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.702 06:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:57.960 06:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:57.960 06:06:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.960 06:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30 00:11:58.220 [2024-08-13 06:06:59.962067] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:58.220 NewBaseBdev 00:11:58.220 [2024-08-13 06:06:59.962333] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:58.220 [2024-08-13 06:06:59.962371] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.220 [2024-08-13 06:06:59.962617] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:58.220 [2024-08-13 06:06:59.962727] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:58.220 [2024-08-13 06:06:59.962735] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:58.220 [2024-08-13 06:06:59.962832] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:58.220 06:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:58.478 06:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:58.738 [ 00:11:58.738 { 00:11:58.738 "name": "NewBaseBdev", 00:11:58.738 "aliases": [ 00:11:58.738 "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30" 00:11:58.738 ], 00:11:58.738 "product_name": "Malloc disk", 00:11:58.738 "block_size": 512, 00:11:58.738 "num_blocks": 65536, 00:11:58.738 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:58.738 "assigned_rate_limits": { 00:11:58.738 "rw_ios_per_sec": 0, 00:11:58.738 "rw_mbytes_per_sec": 0, 00:11:58.738 "r_mbytes_per_sec": 0, 00:11:58.738 "w_mbytes_per_sec": 0 00:11:58.738 }, 00:11:58.738 "claimed": true, 00:11:58.738 "claim_type": "exclusive_write", 00:11:58.738 "zoned": false, 00:11:58.738 "supported_io_types": { 00:11:58.738 "read": true, 00:11:58.738 "write": true, 00:11:58.738 "unmap": true, 00:11:58.738 "flush": true, 00:11:58.738 "reset": true, 00:11:58.738 "nvme_admin": false, 00:11:58.738 "nvme_io": false, 00:11:58.738 "nvme_io_md": false, 00:11:58.738 "write_zeroes": true, 00:11:58.738 "zcopy": true, 00:11:58.738 "get_zone_info": 
false, 00:11:58.738 "zone_management": false, 00:11:58.738 "zone_append": false, 00:11:58.738 "compare": false, 00:11:58.738 "compare_and_write": false, 00:11:58.738 "abort": true, 00:11:58.738 "seek_hole": false, 00:11:58.738 "seek_data": false, 00:11:58.738 "copy": true, 00:11:58.738 "nvme_iov_md": false 00:11:58.738 }, 00:11:58.738 "memory_domains": [ 00:11:58.738 { 00:11:58.738 "dma_device_id": "system", 00:11:58.738 "dma_device_type": 1 00:11:58.738 }, 00:11:58.738 { 00:11:58.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.738 "dma_device_type": 2 00:11:58.738 } 00:11:58.738 ], 00:11:58.738 "driver_specific": {} 00:11:58.738 } 00:11:58.738 ] 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.738 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.998 06:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.998 "name": "Existed_Raid", 00:11:58.998 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:58.998 "strip_size_kb": 0, 00:11:58.998 "state": "online", 00:11:58.998 "raid_level": "raid1", 00:11:58.998 "superblock": true, 00:11:58.998 "num_base_bdevs": 3, 00:11:58.998 "num_base_bdevs_discovered": 3, 00:11:58.998 "num_base_bdevs_operational": 3, 00:11:58.998 "base_bdevs_list": [ 00:11:58.998 { 00:11:58.998 "name": "NewBaseBdev", 00:11:58.998 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:58.998 "is_configured": true, 00:11:58.998 "data_offset": 2048, 00:11:58.998 "data_size": 63488 00:11:58.999 }, 00:11:58.999 { 00:11:58.999 "name": "BaseBdev2", 00:11:58.999 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:58.999 "is_configured": true, 00:11:58.999 "data_offset": 2048, 00:11:58.999 "data_size": 63488 00:11:58.999 }, 00:11:58.999 { 00:11:58.999 "name": "BaseBdev3", 00:11:58.999 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:58.999 "is_configured": true, 00:11:58.999 "data_offset": 2048, 00:11:58.999 "data_size": 63488 00:11:58.999 } 00:11:58.999 ] 00:11:58.999 }' 00:11:58.999 06:07:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.999 06:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:59.568 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:59.568 [2024-08-13 06:07:01.326808] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.828 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:59.828 "name": "Existed_Raid", 00:11:59.828 "aliases": [ 00:11:59.828 "3bec8f86-c2fd-4e35-99dd-8e9092a3b941" 00:11:59.828 ], 00:11:59.828 "product_name": "Raid Volume", 00:11:59.828 "block_size": 512, 00:11:59.828 "num_blocks": 63488, 00:11:59.828 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:59.828 "assigned_rate_limits": { 00:11:59.828 "rw_ios_per_sec": 0, 00:11:59.828 "rw_mbytes_per_sec": 0, 00:11:59.828 "r_mbytes_per_sec": 0, 00:11:59.828 "w_mbytes_per_sec": 0 00:11:59.828 }, 00:11:59.828 "claimed": false, 00:11:59.828 "zoned": false, 00:11:59.828 "supported_io_types": { 00:11:59.828 "read": true, 00:11:59.828 "write": true, 00:11:59.828 "unmap": false, 00:11:59.828 "flush": false, 00:11:59.828 "reset": true, 00:11:59.828 "nvme_admin": false, 00:11:59.828 "nvme_io": false, 00:11:59.828 "nvme_io_md": false, 00:11:59.828 "write_zeroes": true, 00:11:59.828 "zcopy": false, 00:11:59.828 "get_zone_info": false, 00:11:59.828 "zone_management": false, 00:11:59.828 "zone_append": false, 00:11:59.828 "compare": false, 00:11:59.828 "compare_and_write": false, 00:11:59.828 "abort": false, 00:11:59.828 "seek_hole": false, 00:11:59.828 "seek_data": false, 00:11:59.828 "copy": false, 00:11:59.828 "nvme_iov_md": false 00:11:59.828 }, 00:11:59.828 "memory_domains": [ 00:11:59.828 { 00:11:59.828 "dma_device_id": "system", 00:11:59.828 "dma_device_type": 1 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.828 "dma_device_type": 2 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "system", 00:11:59.828 "dma_device_type": 1 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.828 "dma_device_type": 2 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "system", 00:11:59.828 "dma_device_type": 1 00:11:59.828 }, 00:11:59.828 { 00:11:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.828 "dma_device_type": 2 00:11:59.828 } 00:11:59.828 ], 00:11:59.828 "driver_specific": { 00:11:59.828 "raid": { 00:11:59.828 "uuid": "3bec8f86-c2fd-4e35-99dd-8e9092a3b941", 00:11:59.828 "strip_size_kb": 0, 00:11:59.829 "state": "online", 00:11:59.829 "raid_level": "raid1", 
00:11:59.829 "superblock": true, 00:11:59.829 "num_base_bdevs": 3, 00:11:59.829 "num_base_bdevs_discovered": 3, 00:11:59.829 "num_base_bdevs_operational": 3, 00:11:59.829 "base_bdevs_list": [ 00:11:59.829 { 00:11:59.829 "name": "NewBaseBdev", 00:11:59.829 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:11:59.829 "is_configured": true, 00:11:59.829 "data_offset": 2048, 00:11:59.829 "data_size": 63488 00:11:59.829 }, 00:11:59.829 { 00:11:59.829 "name": "BaseBdev2", 00:11:59.829 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:11:59.829 "is_configured": true, 00:11:59.829 "data_offset": 2048, 00:11:59.829 "data_size": 63488 00:11:59.829 }, 00:11:59.829 { 00:11:59.829 "name": "BaseBdev3", 00:11:59.829 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:11:59.829 "is_configured": true, 00:11:59.829 "data_offset": 2048, 00:11:59.829 "data_size": 63488 00:11:59.829 } 00:11:59.829 ] 00:11:59.829 } 00:11:59.829 } 00:11:59.829 }' 00:11:59.829 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.829 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:59.829 BaseBdev2 00:11:59.829 BaseBdev3' 00:11:59.829 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:59.829 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:59.829 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:00.089 "name": "NewBaseBdev", 00:12:00.089 "aliases": [ 00:12:00.089 "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30" 00:12:00.089 ], 00:12:00.089 "product_name": "Malloc disk", 00:12:00.089 "block_size": 512, 00:12:00.089 "num_blocks": 65536, 00:12:00.089 "uuid": "98998ae7-db31-4ee0-a1b7-e6eb1a0e1d30", 00:12:00.089 "assigned_rate_limits": { 00:12:00.089 "rw_ios_per_sec": 0, 00:12:00.089 "rw_mbytes_per_sec": 0, 00:12:00.089 "r_mbytes_per_sec": 0, 00:12:00.089 "w_mbytes_per_sec": 0 00:12:00.089 }, 00:12:00.089 "claimed": true, 00:12:00.089 "claim_type": "exclusive_write", 00:12:00.089 "zoned": false, 00:12:00.089 "supported_io_types": { 00:12:00.089 "read": true, 00:12:00.089 "write": true, 00:12:00.089 "unmap": true, 00:12:00.089 "flush": true, 00:12:00.089 "reset": true, 00:12:00.089 "nvme_admin": false, 00:12:00.089 "nvme_io": false, 00:12:00.089 "nvme_io_md": false, 00:12:00.089 "write_zeroes": true, 00:12:00.089 "zcopy": true, 00:12:00.089 "get_zone_info": false, 00:12:00.089 "zone_management": false, 00:12:00.089 "zone_append": false, 00:12:00.089 "compare": false, 00:12:00.089 "compare_and_write": false, 00:12:00.089 "abort": true, 00:12:00.089 "seek_hole": false, 00:12:00.089 "seek_data": false, 00:12:00.089 "copy": true, 00:12:00.089 "nvme_iov_md": false 00:12:00.089 }, 00:12:00.089 "memory_domains": [ 00:12:00.089 { 00:12:00.089 "dma_device_id": "system", 00:12:00.089 "dma_device_type": 1 00:12:00.089 }, 00:12:00.089 { 00:12:00.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.089 "dma_device_type": 2 00:12:00.089 } 00:12:00.089 ], 00:12:00.089 "driver_specific": {} 00:12:00.089 }' 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.089 06:07:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.089 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.349 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:00.349 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:00.349 06:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:00.349 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:00.349 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:00.349 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:00.349 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:00.609 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:00.609 "name": "BaseBdev2", 00:12:00.609 "aliases": [ 00:12:00.609 "feaca38e-8481-4577-af55-6aea4e22da06" 00:12:00.609 ], 00:12:00.609 "product_name": "Malloc disk", 00:12:00.609 "block_size": 512, 00:12:00.609 "num_blocks": 65536, 00:12:00.609 "uuid": "feaca38e-8481-4577-af55-6aea4e22da06", 00:12:00.609 "assigned_rate_limits": { 00:12:00.609 "rw_ios_per_sec": 0, 00:12:00.609 "rw_mbytes_per_sec": 0, 00:12:00.609 "r_mbytes_per_sec": 0, 00:12:00.609 "w_mbytes_per_sec": 0 00:12:00.609 }, 00:12:00.609 "claimed": true, 00:12:00.609 "claim_type": "exclusive_write", 00:12:00.609 "zoned": false, 00:12:00.609 "supported_io_types": { 00:12:00.609 "read": true, 00:12:00.609 "write": true, 00:12:00.609 "unmap": true, 00:12:00.609 "flush": true, 00:12:00.609 "reset": true, 00:12:00.609 "nvme_admin": false, 00:12:00.609 "nvme_io": false, 00:12:00.609 "nvme_io_md": false, 00:12:00.609 "write_zeroes": true, 00:12:00.609 "zcopy": true, 00:12:00.609 "get_zone_info": false, 00:12:00.609 "zone_management": false, 00:12:00.609 "zone_append": false, 00:12:00.609 "compare": false, 00:12:00.609 "compare_and_write": false, 00:12:00.609 "abort": true, 00:12:00.609 "seek_hole": false, 00:12:00.609 "seek_data": false, 00:12:00.609 "copy": true, 00:12:00.609 "nvme_iov_md": false 00:12:00.609 }, 00:12:00.609 "memory_domains": [ 00:12:00.609 { 00:12:00.609 "dma_device_id": "system", 00:12:00.609 "dma_device_type": 1 00:12:00.609 }, 00:12:00.609 { 00:12:00.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.609 "dma_device_type": 2 00:12:00.609 } 00:12:00.609 ], 00:12:00.609 "driver_specific": {} 00:12:00.609 }' 00:12:00.609 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.609 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.609 06:07:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:00.609 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.610 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:00.869 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:01.129 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:01.129 "name": "BaseBdev3", 00:12:01.129 "aliases": [ 00:12:01.129 "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a" 00:12:01.129 ], 00:12:01.129 "product_name": "Malloc disk", 00:12:01.129 "block_size": 512, 00:12:01.129 "num_blocks": 65536, 00:12:01.129 "uuid": "0e56be9e-f8f2-4be4-a9a6-a77b01436e1a", 00:12:01.129 "assigned_rate_limits": { 00:12:01.129 "rw_ios_per_sec": 0, 00:12:01.129 "rw_mbytes_per_sec": 0, 00:12:01.129 "r_mbytes_per_sec": 0, 00:12:01.129 "w_mbytes_per_sec": 0 00:12:01.129 }, 00:12:01.129 "claimed": true, 00:12:01.129 "claim_type": "exclusive_write", 00:12:01.129 "zoned": false, 00:12:01.129 "supported_io_types": { 00:12:01.129 "read": true, 00:12:01.129 "write": true, 00:12:01.129 "unmap": true, 00:12:01.129 "flush": true, 00:12:01.129 "reset": true, 00:12:01.129 "nvme_admin": false, 00:12:01.129 "nvme_io": false, 00:12:01.129 "nvme_io_md": false, 00:12:01.129 "write_zeroes": true, 00:12:01.129 "zcopy": true, 00:12:01.129 "get_zone_info": false, 00:12:01.129 "zone_management": false, 00:12:01.129 "zone_append": false, 00:12:01.129 "compare": false, 00:12:01.129 "compare_and_write": false, 00:12:01.129 "abort": true, 00:12:01.129 "seek_hole": false, 00:12:01.129 "seek_data": false, 00:12:01.129 "copy": true, 00:12:01.129 "nvme_iov_md": false 00:12:01.129 }, 00:12:01.129 "memory_domains": [ 00:12:01.129 { 00:12:01.129 "dma_device_id": "system", 00:12:01.129 "dma_device_type": 1 00:12:01.129 }, 00:12:01.129 { 00:12:01.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.129 "dma_device_type": 2 00:12:01.129 } 00:12:01.129 ], 00:12:01.129 "driver_specific": {} 00:12:01.129 }' 00:12:01.129 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.129 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.129 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:01.129 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:12:01.391 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:01.391 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:01.391 06:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:01.391 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:01.391 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:01.391 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.391 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.391 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.391 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:01.650 [2024-08-13 06:07:03.347148] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.650 [2024-08-13 06:07:03.347249] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.650 [2024-08-13 06:07:03.347385] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.651 [2024-08-13 06:07:03.347692] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.651 [2024-08-13 06:07:03.347746] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 81229 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 81229 ']' 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 81229 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81229 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81229' 00:12:01.651 killing process with pid 81229 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 81229 00:12:01.651 [2024-08-13 06:07:03.403847] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.651 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 81229 00:12:01.910 [2024-08-13 06:07:03.461023] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.169 06:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:02.169 ************************************ 00:12:02.169 END TEST raid_state_function_test_sb 00:12:02.169 ************************************ 00:12:02.169 00:12:02.169 real 0m24.992s 
00:12:02.169 user 0m46.221s 00:12:02.169 sys 0m3.896s 00:12:02.169 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.169 06:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.169 06:07:03 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:02.169 06:07:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:02.169 06:07:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.169 06:07:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.169 ************************************ 00:12:02.169 START TEST raid_superblock_test 00:12:02.169 ************************************ 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=82128 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 82128 /var/tmp/spdk-raid.sock 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:02.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 82128 ']' 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
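For reference, once the app is listening on /var/tmp/spdk-raid.sock, the superblock test drives essentially the RPC sequence traced below (malloc bdevs wrapped in passthru bdevs, assembled into a RAID1 with an on-disk superblock, inspected, then deleted). A minimal illustrative sketch of that sequence follows; it is not the test script itself, and it assumes a bdev_svc app is already running with the same socket path and rpc.py location used in this run:

#!/usr/bin/env bash
set -euo pipefail

# Helper matching the invocation pattern seen throughout this trace.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Create three 32 MiB malloc bdevs with 512-byte blocks and wrap each in a
# passthru bdev (pt1/pt2/pt3); these serve as the base bdevs for raid_bdev1.
for i in 1 2 3; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble a RAID1 volume with an on-disk superblock (-s) from the passthru bdevs.
rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

# Inspect the assembled raid bdev (state, raid_level, base_bdevs_list, ...).
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# Tear the volume down again.
rpc bdev_raid_delete raid_bdev1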
00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.169 06:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.429 [2024-08-13 06:07:04.001009] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:12:02.429 [2024-08-13 06:07:04.001156] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82128 ] 00:12:02.429 [2024-08-13 06:07:04.146623] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.429 [2024-08-13 06:07:04.192713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.689 [2024-08-13 06:07:04.236159] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.689 [2024-08-13 06:07:04.236195] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:03.258 malloc1 00:12:03.258 06:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.518 [2024-08-13 06:07:05.153020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.518 [2024-08-13 06:07:05.153159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.518 [2024-08-13 06:07:05.153204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:03.518 [2024-08-13 06:07:05.153236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.518 [2024-08-13 06:07:05.155246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.518 [2024-08-13 06:07:05.155313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.518 pt1 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= 
num_base_bdevs )) 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.518 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:03.777 malloc2 00:12:03.777 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.777 [2024-08-13 06:07:05.549224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.777 [2024-08-13 06:07:05.549323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.777 [2024-08-13 06:07:05.549359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.777 [2024-08-13 06:07:05.549386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.777 [2024-08-13 06:07:05.551376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.777 [2024-08-13 06:07:05.551440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.777 pt2 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:04.036 malloc3 00:12:04.036 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:04.295 [2024-08-13 06:07:05.963779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:04.295 [2024-08-13 06:07:05.963842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.295 [2024-08-13 06:07:05.963864] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:04.295 [2024-08-13 06:07:05.963872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.295 [2024-08-13 06:07:05.965939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.295 [2024-08-13 06:07:05.965987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:04.295 pt3 00:12:04.295 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:04.295 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:04.295 06:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:04.555 [2024-08-13 06:07:06.171481] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.555 [2024-08-13 06:07:06.173464] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.555 [2024-08-13 06:07:06.173573] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.555 [2024-08-13 06:07:06.173774] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:04.555 [2024-08-13 06:07:06.173826] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.555 [2024-08-13 06:07:06.174154] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:04.555 [2024-08-13 06:07:06.174334] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:04.555 [2024-08-13 06:07:06.174376] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:04.555 [2024-08-13 06:07:06.174568] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.555 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.815 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.815 "name": "raid_bdev1", 00:12:04.815 
"uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:04.815 "strip_size_kb": 0, 00:12:04.815 "state": "online", 00:12:04.815 "raid_level": "raid1", 00:12:04.815 "superblock": true, 00:12:04.815 "num_base_bdevs": 3, 00:12:04.815 "num_base_bdevs_discovered": 3, 00:12:04.815 "num_base_bdevs_operational": 3, 00:12:04.815 "base_bdevs_list": [ 00:12:04.815 { 00:12:04.815 "name": "pt1", 00:12:04.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.815 "is_configured": true, 00:12:04.815 "data_offset": 2048, 00:12:04.815 "data_size": 63488 00:12:04.815 }, 00:12:04.815 { 00:12:04.815 "name": "pt2", 00:12:04.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.815 "is_configured": true, 00:12:04.815 "data_offset": 2048, 00:12:04.815 "data_size": 63488 00:12:04.815 }, 00:12:04.815 { 00:12:04.815 "name": "pt3", 00:12:04.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.815 "is_configured": true, 00:12:04.815 "data_offset": 2048, 00:12:04.815 "data_size": 63488 00:12:04.815 } 00:12:04.815 ] 00:12:04.815 }' 00:12:04.815 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.815 06:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.383 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.383 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:05.383 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:05.383 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:05.383 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:05.383 06:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:05.383 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:05.383 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:05.643 [2024-08-13 06:07:07.181973] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.643 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:05.643 "name": "raid_bdev1", 00:12:05.643 "aliases": [ 00:12:05.643 "3ad05069-3716-4990-8409-0daa2bedab7d" 00:12:05.643 ], 00:12:05.643 "product_name": "Raid Volume", 00:12:05.643 "block_size": 512, 00:12:05.643 "num_blocks": 63488, 00:12:05.643 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:05.643 "assigned_rate_limits": { 00:12:05.643 "rw_ios_per_sec": 0, 00:12:05.643 "rw_mbytes_per_sec": 0, 00:12:05.643 "r_mbytes_per_sec": 0, 00:12:05.643 "w_mbytes_per_sec": 0 00:12:05.643 }, 00:12:05.643 "claimed": false, 00:12:05.643 "zoned": false, 00:12:05.643 "supported_io_types": { 00:12:05.643 "read": true, 00:12:05.643 "write": true, 00:12:05.643 "unmap": false, 00:12:05.643 "flush": false, 00:12:05.643 "reset": true, 00:12:05.643 "nvme_admin": false, 00:12:05.643 "nvme_io": false, 00:12:05.643 "nvme_io_md": false, 00:12:05.643 "write_zeroes": true, 00:12:05.643 "zcopy": false, 00:12:05.643 "get_zone_info": false, 00:12:05.643 "zone_management": false, 00:12:05.643 "zone_append": false, 00:12:05.643 "compare": false, 00:12:05.643 "compare_and_write": false, 00:12:05.643 "abort": false, 00:12:05.643 "seek_hole": false, 00:12:05.643 "seek_data": false, 
00:12:05.643 "copy": false, 00:12:05.643 "nvme_iov_md": false 00:12:05.643 }, 00:12:05.643 "memory_domains": [ 00:12:05.643 { 00:12:05.643 "dma_device_id": "system", 00:12:05.643 "dma_device_type": 1 00:12:05.643 }, 00:12:05.643 { 00:12:05.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.643 "dma_device_type": 2 00:12:05.643 }, 00:12:05.643 { 00:12:05.643 "dma_device_id": "system", 00:12:05.643 "dma_device_type": 1 00:12:05.643 }, 00:12:05.643 { 00:12:05.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.643 "dma_device_type": 2 00:12:05.643 }, 00:12:05.643 { 00:12:05.643 "dma_device_id": "system", 00:12:05.643 "dma_device_type": 1 00:12:05.643 }, 00:12:05.643 { 00:12:05.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.643 "dma_device_type": 2 00:12:05.643 } 00:12:05.643 ], 00:12:05.643 "driver_specific": { 00:12:05.643 "raid": { 00:12:05.643 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:05.643 "strip_size_kb": 0, 00:12:05.643 "state": "online", 00:12:05.643 "raid_level": "raid1", 00:12:05.643 "superblock": true, 00:12:05.643 "num_base_bdevs": 3, 00:12:05.643 "num_base_bdevs_discovered": 3, 00:12:05.643 "num_base_bdevs_operational": 3, 00:12:05.643 "base_bdevs_list": [ 00:12:05.643 { 00:12:05.643 "name": "pt1", 00:12:05.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.643 "is_configured": true, 00:12:05.643 "data_offset": 2048, 00:12:05.643 "data_size": 63488 00:12:05.643 }, 00:12:05.643 { 00:12:05.643 "name": "pt2", 00:12:05.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.643 "is_configured": true, 00:12:05.643 "data_offset": 2048, 00:12:05.643 "data_size": 63488 00:12:05.643 }, 00:12:05.643 { 00:12:05.643 "name": "pt3", 00:12:05.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.643 "is_configured": true, 00:12:05.643 "data_offset": 2048, 00:12:05.643 "data_size": 63488 00:12:05.643 } 00:12:05.643 ] 00:12:05.643 } 00:12:05.643 } 00:12:05.643 }' 00:12:05.643 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.643 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:05.643 pt2 00:12:05.643 pt3' 00:12:05.643 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:05.643 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:05.643 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:05.903 "name": "pt1", 00:12:05.903 "aliases": [ 00:12:05.903 "00000000-0000-0000-0000-000000000001" 00:12:05.903 ], 00:12:05.903 "product_name": "passthru", 00:12:05.903 "block_size": 512, 00:12:05.903 "num_blocks": 65536, 00:12:05.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.903 "assigned_rate_limits": { 00:12:05.903 "rw_ios_per_sec": 0, 00:12:05.903 "rw_mbytes_per_sec": 0, 00:12:05.903 "r_mbytes_per_sec": 0, 00:12:05.903 "w_mbytes_per_sec": 0 00:12:05.903 }, 00:12:05.903 "claimed": true, 00:12:05.903 "claim_type": "exclusive_write", 00:12:05.903 "zoned": false, 00:12:05.903 "supported_io_types": { 00:12:05.903 "read": true, 00:12:05.903 "write": true, 00:12:05.903 "unmap": true, 00:12:05.903 "flush": true, 00:12:05.903 "reset": true, 00:12:05.903 "nvme_admin": false, 00:12:05.903 "nvme_io": 
false, 00:12:05.903 "nvme_io_md": false, 00:12:05.903 "write_zeroes": true, 00:12:05.903 "zcopy": true, 00:12:05.903 "get_zone_info": false, 00:12:05.903 "zone_management": false, 00:12:05.903 "zone_append": false, 00:12:05.903 "compare": false, 00:12:05.903 "compare_and_write": false, 00:12:05.903 "abort": true, 00:12:05.903 "seek_hole": false, 00:12:05.903 "seek_data": false, 00:12:05.903 "copy": true, 00:12:05.903 "nvme_iov_md": false 00:12:05.903 }, 00:12:05.903 "memory_domains": [ 00:12:05.903 { 00:12:05.903 "dma_device_id": "system", 00:12:05.903 "dma_device_type": 1 00:12:05.903 }, 00:12:05.903 { 00:12:05.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.903 "dma_device_type": 2 00:12:05.903 } 00:12:05.903 ], 00:12:05.903 "driver_specific": { 00:12:05.903 "passthru": { 00:12:05.903 "name": "pt1", 00:12:05.903 "base_bdev_name": "malloc1" 00:12:05.903 } 00:12:05.903 } 00:12:05.903 }' 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:05.903 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.162 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.162 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.162 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.162 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.162 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.162 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:06.162 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.421 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.421 "name": "pt2", 00:12:06.421 "aliases": [ 00:12:06.421 "00000000-0000-0000-0000-000000000002" 00:12:06.421 ], 00:12:06.421 "product_name": "passthru", 00:12:06.421 "block_size": 512, 00:12:06.421 "num_blocks": 65536, 00:12:06.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.421 "assigned_rate_limits": { 00:12:06.421 "rw_ios_per_sec": 0, 00:12:06.421 "rw_mbytes_per_sec": 0, 00:12:06.421 "r_mbytes_per_sec": 0, 00:12:06.421 "w_mbytes_per_sec": 0 00:12:06.421 }, 00:12:06.421 "claimed": true, 00:12:06.421 "claim_type": "exclusive_write", 00:12:06.421 "zoned": false, 00:12:06.421 "supported_io_types": { 00:12:06.421 "read": true, 00:12:06.421 "write": true, 00:12:06.421 "unmap": true, 00:12:06.421 "flush": true, 00:12:06.421 "reset": true, 00:12:06.421 "nvme_admin": false, 00:12:06.421 "nvme_io": false, 00:12:06.421 "nvme_io_md": false, 00:12:06.421 "write_zeroes": true, 00:12:06.421 "zcopy": true, 00:12:06.421 "get_zone_info": false, 00:12:06.421 
"zone_management": false, 00:12:06.421 "zone_append": false, 00:12:06.421 "compare": false, 00:12:06.421 "compare_and_write": false, 00:12:06.421 "abort": true, 00:12:06.421 "seek_hole": false, 00:12:06.421 "seek_data": false, 00:12:06.421 "copy": true, 00:12:06.421 "nvme_iov_md": false 00:12:06.421 }, 00:12:06.421 "memory_domains": [ 00:12:06.421 { 00:12:06.421 "dma_device_id": "system", 00:12:06.421 "dma_device_type": 1 00:12:06.421 }, 00:12:06.421 { 00:12:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.421 "dma_device_type": 2 00:12:06.421 } 00:12:06.421 ], 00:12:06.421 "driver_specific": { 00:12:06.421 "passthru": { 00:12:06.421 "name": "pt2", 00:12:06.421 "base_bdev_name": "malloc2" 00:12:06.421 } 00:12:06.421 } 00:12:06.421 }' 00:12:06.421 06:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.421 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.421 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.421 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.421 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.421 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.421 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:06.680 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.939 "name": "pt3", 00:12:06.939 "aliases": [ 00:12:06.939 "00000000-0000-0000-0000-000000000003" 00:12:06.939 ], 00:12:06.939 "product_name": "passthru", 00:12:06.939 "block_size": 512, 00:12:06.939 "num_blocks": 65536, 00:12:06.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.939 "assigned_rate_limits": { 00:12:06.939 "rw_ios_per_sec": 0, 00:12:06.939 "rw_mbytes_per_sec": 0, 00:12:06.939 "r_mbytes_per_sec": 0, 00:12:06.939 "w_mbytes_per_sec": 0 00:12:06.939 }, 00:12:06.939 "claimed": true, 00:12:06.939 "claim_type": "exclusive_write", 00:12:06.939 "zoned": false, 00:12:06.939 "supported_io_types": { 00:12:06.939 "read": true, 00:12:06.939 "write": true, 00:12:06.939 "unmap": true, 00:12:06.939 "flush": true, 00:12:06.939 "reset": true, 00:12:06.939 "nvme_admin": false, 00:12:06.939 "nvme_io": false, 00:12:06.939 "nvme_io_md": false, 00:12:06.939 "write_zeroes": true, 00:12:06.939 "zcopy": true, 00:12:06.939 "get_zone_info": false, 00:12:06.939 "zone_management": false, 00:12:06.939 "zone_append": false, 00:12:06.939 "compare": false, 00:12:06.939 "compare_and_write": false, 00:12:06.939 "abort": 
true, 00:12:06.939 "seek_hole": false, 00:12:06.939 "seek_data": false, 00:12:06.939 "copy": true, 00:12:06.939 "nvme_iov_md": false 00:12:06.939 }, 00:12:06.939 "memory_domains": [ 00:12:06.939 { 00:12:06.939 "dma_device_id": "system", 00:12:06.939 "dma_device_type": 1 00:12:06.939 }, 00:12:06.939 { 00:12:06.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.939 "dma_device_type": 2 00:12:06.939 } 00:12:06.939 ], 00:12:06.939 "driver_specific": { 00:12:06.939 "passthru": { 00:12:06.939 "name": "pt3", 00:12:06.939 "base_bdev_name": "malloc3" 00:12:06.939 } 00:12:06.939 } 00:12:06.939 }' 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.939 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.199 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.199 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.199 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.199 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.199 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.199 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:07.199 06:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:12:07.458 [2024-08-13 06:07:09.034793] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.458 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=3ad05069-3716-4990-8409-0daa2bedab7d 00:12:07.458 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 3ad05069-3716-4990-8409-0daa2bedab7d ']' 00:12:07.458 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:07.458 [2024-08-13 06:07:09.230223] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.458 [2024-08-13 06:07:09.230306] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.458 [2024-08-13 06:07:09.230399] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.458 [2024-08-13 06:07:09.230490] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.458 [2024-08-13 06:07:09.230500] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:07.717 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:12:07.717 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.717 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:12:07.717 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:12:07.717 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.717 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:07.976 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.976 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:08.236 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.236 06:07:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:08.236 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:08.236 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:08.495 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:08.754 [2024-08-13 06:07:10.424190] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:08.754 
[2024-08-13 06:07:10.425996] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:08.754 [2024-08-13 06:07:10.426116] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:08.754 [2024-08-13 06:07:10.426187] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:08.754 [2024-08-13 06:07:10.426303] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:08.754 [2024-08-13 06:07:10.426381] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:08.754 [2024-08-13 06:07:10.426440] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.754 [2024-08-13 06:07:10.426477] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:12:08.754 request: 00:12:08.754 { 00:12:08.754 "name": "raid_bdev1", 00:12:08.754 "raid_level": "raid1", 00:12:08.754 "base_bdevs": [ 00:12:08.754 "malloc1", 00:12:08.754 "malloc2", 00:12:08.754 "malloc3" 00:12:08.754 ], 00:12:08.754 "superblock": false, 00:12:08.754 "method": "bdev_raid_create", 00:12:08.754 "req_id": 1 00:12:08.754 } 00:12:08.754 Got JSON-RPC error response 00:12:08.754 response: 00:12:08.754 { 00:12:08.754 "code": -17, 00:12:08.754 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:08.754 } 00:12:08.754 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:12:08.754 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:12:08.754 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:12:08.754 06:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:12:08.754 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.754 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:12:09.014 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:12:09.014 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:12:09.014 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:09.273 [2024-08-13 06:07:10.835438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:09.273 [2024-08-13 06:07:10.835500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.273 [2024-08-13 06:07:10.835520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:09.273 [2024-08-13 06:07:10.835528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.273 [2024-08-13 06:07:10.837590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.273 [2024-08-13 06:07:10.837687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:09.273 [2024-08-13 06:07:10.837771] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:09.273 [2024-08-13 06:07:10.837831] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:09.273 pt1 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.273 06:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.273 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.273 "name": "raid_bdev1", 00:12:09.273 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:09.273 "strip_size_kb": 0, 00:12:09.273 "state": "configuring", 00:12:09.273 "raid_level": "raid1", 00:12:09.273 "superblock": true, 00:12:09.273 "num_base_bdevs": 3, 00:12:09.273 "num_base_bdevs_discovered": 1, 00:12:09.273 "num_base_bdevs_operational": 3, 00:12:09.273 "base_bdevs_list": [ 00:12:09.273 { 00:12:09.273 "name": "pt1", 00:12:09.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:09.273 "is_configured": true, 00:12:09.273 "data_offset": 2048, 00:12:09.273 "data_size": 63488 00:12:09.273 }, 00:12:09.273 { 00:12:09.273 "name": null, 00:12:09.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.273 "is_configured": false, 00:12:09.273 "data_offset": 2048, 00:12:09.273 "data_size": 63488 00:12:09.273 }, 00:12:09.273 { 00:12:09.273 "name": null, 00:12:09.273 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.273 "is_configured": false, 00:12:09.273 "data_offset": 2048, 00:12:09.273 "data_size": 63488 00:12:09.273 } 00:12:09.273 ] 00:12:09.273 }' 00:12:09.273 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.273 06:07:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.841 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:12:09.841 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.105 [2024-08-13 06:07:11.717968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.105 [2024-08-13 06:07:11.718104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.105 [2024-08-13 06:07:11.718153] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000009080 00:12:10.105 [2024-08-13 06:07:11.718182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.105 [2024-08-13 06:07:11.718606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.105 [2024-08-13 06:07:11.718666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.105 [2024-08-13 06:07:11.718771] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:10.105 [2024-08-13 06:07:11.718820] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.105 pt2 00:12:10.105 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:10.364 [2024-08-13 06:07:11.921643] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.364 06:07:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.364 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.364 "name": "raid_bdev1", 00:12:10.364 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:10.364 "strip_size_kb": 0, 00:12:10.364 "state": "configuring", 00:12:10.364 "raid_level": "raid1", 00:12:10.364 "superblock": true, 00:12:10.364 "num_base_bdevs": 3, 00:12:10.365 "num_base_bdevs_discovered": 1, 00:12:10.365 "num_base_bdevs_operational": 3, 00:12:10.365 "base_bdevs_list": [ 00:12:10.365 { 00:12:10.365 "name": "pt1", 00:12:10.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.365 "is_configured": true, 00:12:10.365 "data_offset": 2048, 00:12:10.365 "data_size": 63488 00:12:10.365 }, 00:12:10.365 { 00:12:10.365 "name": null, 00:12:10.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.365 "is_configured": false, 00:12:10.365 "data_offset": 2048, 00:12:10.365 "data_size": 63488 00:12:10.365 }, 00:12:10.365 { 00:12:10.365 "name": null, 00:12:10.365 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.365 "is_configured": false, 00:12:10.365 "data_offset": 2048, 00:12:10.365 "data_size": 63488 00:12:10.365 } 00:12:10.365 ] 00:12:10.365 }' 
00:12:10.365 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.365 06:07:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.932 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:12:10.932 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:10.932 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:11.191 [2024-08-13 06:07:12.896142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:11.191 [2024-08-13 06:07:12.896203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.191 [2024-08-13 06:07:12.896221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:11.191 [2024-08-13 06:07:12.896231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.191 [2024-08-13 06:07:12.896608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.191 [2024-08-13 06:07:12.896627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:11.191 [2024-08-13 06:07:12.896695] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:11.191 [2024-08-13 06:07:12.896718] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:11.191 pt2 00:12:11.191 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:12:11.191 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:11.191 06:07:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:11.450 [2024-08-13 06:07:13.103795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:11.450 [2024-08-13 06:07:13.103863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.450 [2024-08-13 06:07:13.103879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:11.450 [2024-08-13 06:07:13.103893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.450 [2024-08-13 06:07:13.104293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.450 [2024-08-13 06:07:13.104315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:11.450 [2024-08-13 06:07:13.104388] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:11.450 [2024-08-13 06:07:13.104412] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:11.450 [2024-08-13 06:07:13.104515] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:11.450 [2024-08-13 06:07:13.104536] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.450 [2024-08-13 06:07:13.104836] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:12:11.450 [2024-08-13 06:07:13.104968] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:11.450 [2024-08-13 
06:07:13.104978] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:12:11.450 [2024-08-13 06:07:13.105097] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.450 pt3 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:11.450 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:11.451 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:11.451 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:11.451 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:11.451 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.451 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.709 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.709 "name": "raid_bdev1", 00:12:11.709 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:11.709 "strip_size_kb": 0, 00:12:11.709 "state": "online", 00:12:11.709 "raid_level": "raid1", 00:12:11.709 "superblock": true, 00:12:11.709 "num_base_bdevs": 3, 00:12:11.709 "num_base_bdevs_discovered": 3, 00:12:11.709 "num_base_bdevs_operational": 3, 00:12:11.709 "base_bdevs_list": [ 00:12:11.709 { 00:12:11.709 "name": "pt1", 00:12:11.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:11.709 "is_configured": true, 00:12:11.709 "data_offset": 2048, 00:12:11.709 "data_size": 63488 00:12:11.709 }, 00:12:11.709 { 00:12:11.709 "name": "pt2", 00:12:11.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.709 "is_configured": true, 00:12:11.709 "data_offset": 2048, 00:12:11.709 "data_size": 63488 00:12:11.709 }, 00:12:11.709 { 00:12:11.709 "name": "pt3", 00:12:11.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.709 "is_configured": true, 00:12:11.709 "data_offset": 2048, 00:12:11.709 "data_size": 63488 00:12:11.709 } 00:12:11.709 ] 00:12:11.709 }' 00:12:11.709 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.709 06:07:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:12.276 06:07:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:12.276 [2024-08-13 06:07:14.062393] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.534 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:12.534 "name": "raid_bdev1", 00:12:12.534 "aliases": [ 00:12:12.534 "3ad05069-3716-4990-8409-0daa2bedab7d" 00:12:12.534 ], 00:12:12.534 "product_name": "Raid Volume", 00:12:12.534 "block_size": 512, 00:12:12.534 "num_blocks": 63488, 00:12:12.534 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:12.534 "assigned_rate_limits": { 00:12:12.534 "rw_ios_per_sec": 0, 00:12:12.534 "rw_mbytes_per_sec": 0, 00:12:12.534 "r_mbytes_per_sec": 0, 00:12:12.534 "w_mbytes_per_sec": 0 00:12:12.534 }, 00:12:12.534 "claimed": false, 00:12:12.534 "zoned": false, 00:12:12.534 "supported_io_types": { 00:12:12.534 "read": true, 00:12:12.534 "write": true, 00:12:12.534 "unmap": false, 00:12:12.534 "flush": false, 00:12:12.534 "reset": true, 00:12:12.534 "nvme_admin": false, 00:12:12.534 "nvme_io": false, 00:12:12.534 "nvme_io_md": false, 00:12:12.534 "write_zeroes": true, 00:12:12.534 "zcopy": false, 00:12:12.534 "get_zone_info": false, 00:12:12.534 "zone_management": false, 00:12:12.534 "zone_append": false, 00:12:12.534 "compare": false, 00:12:12.534 "compare_and_write": false, 00:12:12.534 "abort": false, 00:12:12.534 "seek_hole": false, 00:12:12.534 "seek_data": false, 00:12:12.534 "copy": false, 00:12:12.534 "nvme_iov_md": false 00:12:12.534 }, 00:12:12.534 "memory_domains": [ 00:12:12.534 { 00:12:12.534 "dma_device_id": "system", 00:12:12.534 "dma_device_type": 1 00:12:12.534 }, 00:12:12.534 { 00:12:12.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.534 "dma_device_type": 2 00:12:12.534 }, 00:12:12.534 { 00:12:12.534 "dma_device_id": "system", 00:12:12.534 "dma_device_type": 1 00:12:12.534 }, 00:12:12.534 { 00:12:12.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.534 "dma_device_type": 2 00:12:12.534 }, 00:12:12.534 { 00:12:12.534 "dma_device_id": "system", 00:12:12.534 "dma_device_type": 1 00:12:12.534 }, 00:12:12.534 { 00:12:12.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.534 "dma_device_type": 2 00:12:12.534 } 00:12:12.534 ], 00:12:12.534 "driver_specific": { 00:12:12.534 "raid": { 00:12:12.534 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:12.534 "strip_size_kb": 0, 00:12:12.534 "state": "online", 00:12:12.534 "raid_level": "raid1", 00:12:12.534 "superblock": true, 00:12:12.534 "num_base_bdevs": 3, 00:12:12.535 "num_base_bdevs_discovered": 3, 00:12:12.535 "num_base_bdevs_operational": 3, 00:12:12.535 "base_bdevs_list": [ 00:12:12.535 { 00:12:12.535 "name": "pt1", 00:12:12.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.535 "is_configured": true, 00:12:12.535 "data_offset": 2048, 00:12:12.535 "data_size": 63488 00:12:12.535 }, 00:12:12.535 { 00:12:12.535 "name": "pt2", 00:12:12.535 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:12.535 "is_configured": true, 00:12:12.535 "data_offset": 2048, 00:12:12.535 "data_size": 63488 00:12:12.535 }, 00:12:12.535 { 00:12:12.535 "name": "pt3", 00:12:12.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.535 "is_configured": true, 00:12:12.535 "data_offset": 2048, 00:12:12.535 "data_size": 63488 00:12:12.535 } 00:12:12.535 ] 00:12:12.535 } 00:12:12.535 } 00:12:12.535 }' 00:12:12.535 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.535 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:12.535 pt2 00:12:12.535 pt3' 00:12:12.535 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:12.535 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:12.535 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:12.793 "name": "pt1", 00:12:12.793 "aliases": [ 00:12:12.793 "00000000-0000-0000-0000-000000000001" 00:12:12.793 ], 00:12:12.793 "product_name": "passthru", 00:12:12.793 "block_size": 512, 00:12:12.793 "num_blocks": 65536, 00:12:12.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.793 "assigned_rate_limits": { 00:12:12.793 "rw_ios_per_sec": 0, 00:12:12.793 "rw_mbytes_per_sec": 0, 00:12:12.793 "r_mbytes_per_sec": 0, 00:12:12.793 "w_mbytes_per_sec": 0 00:12:12.793 }, 00:12:12.793 "claimed": true, 00:12:12.793 "claim_type": "exclusive_write", 00:12:12.793 "zoned": false, 00:12:12.793 "supported_io_types": { 00:12:12.793 "read": true, 00:12:12.793 "write": true, 00:12:12.793 "unmap": true, 00:12:12.793 "flush": true, 00:12:12.793 "reset": true, 00:12:12.793 "nvme_admin": false, 00:12:12.793 "nvme_io": false, 00:12:12.793 "nvme_io_md": false, 00:12:12.793 "write_zeroes": true, 00:12:12.793 "zcopy": true, 00:12:12.793 "get_zone_info": false, 00:12:12.793 "zone_management": false, 00:12:12.793 "zone_append": false, 00:12:12.793 "compare": false, 00:12:12.793 "compare_and_write": false, 00:12:12.793 "abort": true, 00:12:12.793 "seek_hole": false, 00:12:12.793 "seek_data": false, 00:12:12.793 "copy": true, 00:12:12.793 "nvme_iov_md": false 00:12:12.793 }, 00:12:12.793 "memory_domains": [ 00:12:12.793 { 00:12:12.793 "dma_device_id": "system", 00:12:12.793 "dma_device_type": 1 00:12:12.793 }, 00:12:12.793 { 00:12:12.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.793 "dma_device_type": 2 00:12:12.793 } 00:12:12.793 ], 00:12:12.793 "driver_specific": { 00:12:12.793 "passthru": { 00:12:12.793 "name": "pt1", 00:12:12.793 "base_bdev_name": "malloc1" 00:12:12.793 } 00:12:12.793 } 00:12:12.793 }' 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.793 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:13.053 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.053 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.053 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.053 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.053 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:13.053 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:13.053 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:13.311 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:13.311 "name": "pt2", 00:12:13.311 "aliases": [ 00:12:13.311 "00000000-0000-0000-0000-000000000002" 00:12:13.311 ], 00:12:13.311 "product_name": "passthru", 00:12:13.311 "block_size": 512, 00:12:13.311 "num_blocks": 65536, 00:12:13.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.311 "assigned_rate_limits": { 00:12:13.311 "rw_ios_per_sec": 0, 00:12:13.311 "rw_mbytes_per_sec": 0, 00:12:13.311 "r_mbytes_per_sec": 0, 00:12:13.311 "w_mbytes_per_sec": 0 00:12:13.311 }, 00:12:13.311 "claimed": true, 00:12:13.311 "claim_type": "exclusive_write", 00:12:13.311 "zoned": false, 00:12:13.311 "supported_io_types": { 00:12:13.311 "read": true, 00:12:13.311 "write": true, 00:12:13.311 "unmap": true, 00:12:13.311 "flush": true, 00:12:13.311 "reset": true, 00:12:13.311 "nvme_admin": false, 00:12:13.311 "nvme_io": false, 00:12:13.311 "nvme_io_md": false, 00:12:13.311 "write_zeroes": true, 00:12:13.311 "zcopy": true, 00:12:13.311 "get_zone_info": false, 00:12:13.311 "zone_management": false, 00:12:13.311 "zone_append": false, 00:12:13.311 "compare": false, 00:12:13.311 "compare_and_write": false, 00:12:13.311 "abort": true, 00:12:13.311 "seek_hole": false, 00:12:13.311 "seek_data": false, 00:12:13.311 "copy": true, 00:12:13.311 "nvme_iov_md": false 00:12:13.311 }, 00:12:13.311 "memory_domains": [ 00:12:13.311 { 00:12:13.311 "dma_device_id": "system", 00:12:13.311 "dma_device_type": 1 00:12:13.311 }, 00:12:13.311 { 00:12:13.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.311 "dma_device_type": 2 00:12:13.311 } 00:12:13.311 ], 00:12:13.311 "driver_specific": { 00:12:13.311 "passthru": { 00:12:13.311 "name": "pt2", 00:12:13.311 "base_bdev_name": "malloc2" 00:12:13.311 } 00:12:13.311 } 00:12:13.311 }' 00:12:13.311 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.311 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.311 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:13.311 06:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:13.311 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:13.311 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:13.311 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:13.570 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:13.829 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:13.829 "name": "pt3", 00:12:13.829 "aliases": [ 00:12:13.829 "00000000-0000-0000-0000-000000000003" 00:12:13.829 ], 00:12:13.829 "product_name": "passthru", 00:12:13.829 "block_size": 512, 00:12:13.829 "num_blocks": 65536, 00:12:13.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.829 "assigned_rate_limits": { 00:12:13.829 "rw_ios_per_sec": 0, 00:12:13.829 "rw_mbytes_per_sec": 0, 00:12:13.829 "r_mbytes_per_sec": 0, 00:12:13.829 "w_mbytes_per_sec": 0 00:12:13.829 }, 00:12:13.829 "claimed": true, 00:12:13.829 "claim_type": "exclusive_write", 00:12:13.829 "zoned": false, 00:12:13.829 "supported_io_types": { 00:12:13.829 "read": true, 00:12:13.829 "write": true, 00:12:13.829 "unmap": true, 00:12:13.829 "flush": true, 00:12:13.829 "reset": true, 00:12:13.829 "nvme_admin": false, 00:12:13.829 "nvme_io": false, 00:12:13.829 "nvme_io_md": false, 00:12:13.829 "write_zeroes": true, 00:12:13.829 "zcopy": true, 00:12:13.829 "get_zone_info": false, 00:12:13.829 "zone_management": false, 00:12:13.829 "zone_append": false, 00:12:13.829 "compare": false, 00:12:13.829 "compare_and_write": false, 00:12:13.829 "abort": true, 00:12:13.829 "seek_hole": false, 00:12:13.829 "seek_data": false, 00:12:13.829 "copy": true, 00:12:13.829 "nvme_iov_md": false 00:12:13.829 }, 00:12:13.829 "memory_domains": [ 00:12:13.829 { 00:12:13.829 "dma_device_id": "system", 00:12:13.829 "dma_device_type": 1 00:12:13.829 }, 00:12:13.829 { 00:12:13.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.829 "dma_device_type": 2 00:12:13.829 } 00:12:13.829 ], 00:12:13.829 "driver_specific": { 00:12:13.829 "passthru": { 00:12:13.829 "name": "pt3", 00:12:13.829 "base_bdev_name": "malloc3" 00:12:13.829 } 00:12:13.829 } 00:12:13.829 }' 00:12:13.829 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.829 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.829 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:13.829 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:13.829 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:14.101 06:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:12:14.387 [2024-08-13 06:07:16.003207] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.387 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 3ad05069-3716-4990-8409-0daa2bedab7d '!=' 3ad05069-3716-4990-8409-0daa2bedab7d ']' 00:12:14.387 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:12:14.388 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:14.388 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:14.388 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:14.669 [2024-08-13 06:07:16.198657] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.669 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:14.669 "name": "raid_bdev1", 00:12:14.669 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:14.669 "strip_size_kb": 0, 00:12:14.669 "state": "online", 00:12:14.669 "raid_level": "raid1", 00:12:14.669 "superblock": true, 00:12:14.669 "num_base_bdevs": 3, 00:12:14.670 "num_base_bdevs_discovered": 2, 00:12:14.670 "num_base_bdevs_operational": 2, 00:12:14.670 "base_bdevs_list": [ 00:12:14.670 { 00:12:14.670 "name": null, 00:12:14.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.670 "is_configured": false, 00:12:14.670 "data_offset": 2048, 00:12:14.670 "data_size": 63488 
00:12:14.670 }, 00:12:14.670 { 00:12:14.670 "name": "pt2", 00:12:14.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.670 "is_configured": true, 00:12:14.670 "data_offset": 2048, 00:12:14.670 "data_size": 63488 00:12:14.670 }, 00:12:14.670 { 00:12:14.670 "name": "pt3", 00:12:14.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.670 "is_configured": true, 00:12:14.670 "data_offset": 2048, 00:12:14.670 "data_size": 63488 00:12:14.670 } 00:12:14.670 ] 00:12:14.670 }' 00:12:14.670 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:14.670 06:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.255 06:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:15.513 [2024-08-13 06:07:17.140999] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.513 [2024-08-13 06:07:17.141118] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.513 [2024-08-13 06:07:17.141235] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.513 [2024-08-13 06:07:17.141306] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.513 [2024-08-13 06:07:17.141367] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:12:15.513 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:12:15.513 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:12:15.772 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:16.031 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:12:16.031 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:12:16.031 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:12:16.031 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:12:16.031 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:16.290 [2024-08-13 06:07:17.931573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:16.290 [2024-08-13 
06:07:17.931697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.290 [2024-08-13 06:07:17.931741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:16.290 [2024-08-13 06:07:17.931775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.290 [2024-08-13 06:07:17.933846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.290 [2024-08-13 06:07:17.933922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:16.290 [2024-08-13 06:07:17.934011] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:16.290 [2024-08-13 06:07:17.934096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:16.290 pt2 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.290 06:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.548 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.548 "name": "raid_bdev1", 00:12:16.549 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:16.549 "strip_size_kb": 0, 00:12:16.549 "state": "configuring", 00:12:16.549 "raid_level": "raid1", 00:12:16.549 "superblock": true, 00:12:16.549 "num_base_bdevs": 3, 00:12:16.549 "num_base_bdevs_discovered": 1, 00:12:16.549 "num_base_bdevs_operational": 2, 00:12:16.549 "base_bdevs_list": [ 00:12:16.549 { 00:12:16.549 "name": null, 00:12:16.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.549 "is_configured": false, 00:12:16.549 "data_offset": 2048, 00:12:16.549 "data_size": 63488 00:12:16.549 }, 00:12:16.549 { 00:12:16.549 "name": "pt2", 00:12:16.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.549 "is_configured": true, 00:12:16.549 "data_offset": 2048, 00:12:16.549 "data_size": 63488 00:12:16.549 }, 00:12:16.549 { 00:12:16.549 "name": null, 00:12:16.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.549 "is_configured": false, 00:12:16.549 "data_offset": 2048, 00:12:16.549 "data_size": 63488 00:12:16.549 } 00:12:16.549 ] 00:12:16.549 }' 00:12:16.549 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:12:16.549 06:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.116 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:12:17.116 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.117 [2024-08-13 06:07:18.882051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.117 [2024-08-13 06:07:18.882129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.117 [2024-08-13 06:07:18.882148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:17.117 [2024-08-13 06:07:18.882159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.117 [2024-08-13 06:07:18.882521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.117 [2024-08-13 06:07:18.882543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.117 [2024-08-13 06:07:18.882612] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:17.117 [2024-08-13 06:07:18.882641] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.117 [2024-08-13 06:07:18.882732] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:17.117 [2024-08-13 06:07:18.882758] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.117 [2024-08-13 06:07:18.882970] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:17.117 [2024-08-13 06:07:18.883104] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:17.117 [2024-08-13 06:07:18.883114] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:12:17.117 [2024-08-13 06:07:18.883228] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.117 pt3 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.117 06:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.376 06:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.376 "name": "raid_bdev1", 00:12:17.376 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:17.376 "strip_size_kb": 0, 00:12:17.376 "state": "online", 00:12:17.376 "raid_level": "raid1", 00:12:17.376 "superblock": true, 00:12:17.376 "num_base_bdevs": 3, 00:12:17.376 "num_base_bdevs_discovered": 2, 00:12:17.376 "num_base_bdevs_operational": 2, 00:12:17.376 "base_bdevs_list": [ 00:12:17.376 { 00:12:17.376 "name": null, 00:12:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.376 "is_configured": false, 00:12:17.376 "data_offset": 2048, 00:12:17.376 "data_size": 63488 00:12:17.376 }, 00:12:17.376 { 00:12:17.376 "name": "pt2", 00:12:17.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.376 "is_configured": true, 00:12:17.376 "data_offset": 2048, 00:12:17.376 "data_size": 63488 00:12:17.376 }, 00:12:17.376 { 00:12:17.376 "name": "pt3", 00:12:17.376 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.376 "is_configured": true, 00:12:17.376 "data_offset": 2048, 00:12:17.376 "data_size": 63488 00:12:17.376 } 00:12:17.376 ] 00:12:17.376 }' 00:12:17.376 06:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.376 06:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.942 06:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:18.200 [2024-08-13 06:07:19.788672] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.200 [2024-08-13 06:07:19.788764] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.200 [2024-08-13 06:07:19.788847] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.200 [2024-08-13 06:07:19.788934] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.200 [2024-08-13 06:07:19.789005] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:12:18.200 06:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.200 06:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:12:18.458 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:12:18.458 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:12:18.458 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:12:18.458 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:12:18.458 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:18.458 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:12:18.717 [2024-08-13 06:07:20.399630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:18.717 [2024-08-13 06:07:20.399745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.717 [2024-08-13 06:07:20.399782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:18.717 [2024-08-13 06:07:20.399807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.717 [2024-08-13 06:07:20.401837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.717 [2024-08-13 06:07:20.401926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:18.717 [2024-08-13 06:07:20.402042] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:18.717 [2024-08-13 06:07:20.402119] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:18.717 [2024-08-13 06:07:20.402275] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:18.717 [2024-08-13 06:07:20.402328] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.717 [2024-08-13 06:07:20.402367] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:12:18.717 [2024-08-13 06:07:20.402446] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.717 pt1 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.717 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.976 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.976 "name": "raid_bdev1", 00:12:18.976 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:18.976 "strip_size_kb": 0, 00:12:18.976 "state": "configuring", 00:12:18.976 "raid_level": "raid1", 00:12:18.976 "superblock": true, 00:12:18.976 "num_base_bdevs": 3, 00:12:18.976 "num_base_bdevs_discovered": 1, 00:12:18.976 "num_base_bdevs_operational": 2, 00:12:18.976 
"base_bdevs_list": [ 00:12:18.976 { 00:12:18.976 "name": null, 00:12:18.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.976 "is_configured": false, 00:12:18.976 "data_offset": 2048, 00:12:18.976 "data_size": 63488 00:12:18.976 }, 00:12:18.976 { 00:12:18.976 "name": "pt2", 00:12:18.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.976 "is_configured": true, 00:12:18.976 "data_offset": 2048, 00:12:18.976 "data_size": 63488 00:12:18.976 }, 00:12:18.976 { 00:12:18.976 "name": null, 00:12:18.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.976 "is_configured": false, 00:12:18.976 "data_offset": 2048, 00:12:18.976 "data_size": 63488 00:12:18.976 } 00:12:18.976 ] 00:12:18.976 }' 00:12:18.976 06:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.976 06:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.543 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:12:19.543 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:19.802 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:12:19.802 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:19.802 [2024-08-13 06:07:21.573713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:19.802 [2024-08-13 06:07:21.573830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.802 [2024-08-13 06:07:21.573867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:19.802 [2024-08-13 06:07:21.573892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.802 [2024-08-13 06:07:21.574323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.802 [2024-08-13 06:07:21.574380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:19.802 [2024-08-13 06:07:21.574482] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:19.802 [2024-08-13 06:07:21.574529] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:19.802 [2024-08-13 06:07:21.574644] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:12:19.802 [2024-08-13 06:07:21.574679] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.802 [2024-08-13 06:07:21.574930] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:12:19.802 [2024-08-13 06:07:21.575093] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:12:19.802 [2024-08-13 06:07:21.575110] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:12:19.802 [2024-08-13 06:07:21.575205] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.802 pt3 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:20.061 "name": "raid_bdev1", 00:12:20.061 "uuid": "3ad05069-3716-4990-8409-0daa2bedab7d", 00:12:20.061 "strip_size_kb": 0, 00:12:20.061 "state": "online", 00:12:20.061 "raid_level": "raid1", 00:12:20.061 "superblock": true, 00:12:20.061 "num_base_bdevs": 3, 00:12:20.061 "num_base_bdevs_discovered": 2, 00:12:20.061 "num_base_bdevs_operational": 2, 00:12:20.061 "base_bdevs_list": [ 00:12:20.061 { 00:12:20.061 "name": null, 00:12:20.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.061 "is_configured": false, 00:12:20.061 "data_offset": 2048, 00:12:20.061 "data_size": 63488 00:12:20.061 }, 00:12:20.061 { 00:12:20.061 "name": "pt2", 00:12:20.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.061 "is_configured": true, 00:12:20.061 "data_offset": 2048, 00:12:20.061 "data_size": 63488 00:12:20.061 }, 00:12:20.061 { 00:12:20.061 "name": "pt3", 00:12:20.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.061 "is_configured": true, 00:12:20.061 "data_offset": 2048, 00:12:20.061 "data_size": 63488 00:12:20.061 } 00:12:20.061 ] 00:12:20.061 }' 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:20.061 06:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.628 06:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:20.628 06:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:20.887 06:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:12:20.887 06:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:20.887 06:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:12:21.146 [2024-08-13 06:07:22.736093] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 3ad05069-3716-4990-8409-0daa2bedab7d '!=' 
3ad05069-3716-4990-8409-0daa2bedab7d ']' 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 82128 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 82128 ']' 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 82128 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82128 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:21.146 killing process with pid 82128 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82128' 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 82128 00:12:21.146 [2024-08-13 06:07:22.809902] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.146 [2024-08-13 06:07:22.809981] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.146 [2024-08-13 06:07:22.810053] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.146 [2024-08-13 06:07:22.810066] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:12:21.146 06:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 82128 00:12:21.146 [2024-08-13 06:07:22.842675] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.406 06:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:12:21.406 00:12:21.406 real 0m19.178s 00:12:21.406 user 0m35.285s 00:12:21.406 sys 0m3.179s 00:12:21.406 06:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.406 ************************************ 00:12:21.406 END TEST raid_superblock_test 00:12:21.407 ************************************ 00:12:21.407 06:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 06:07:23 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:21.407 06:07:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:21.407 06:07:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.407 06:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 ************************************ 00:12:21.407 START TEST raid_read_error_test 00:12:21.407 ************************************ 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 read 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:21.407 06:07:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.iDlaEs3Mys 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=82818 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 82818 /var/tmp/spdk-raid.sock 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 82818 ']' 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:21.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:21.407 06:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.666 [2024-08-13 06:07:23.262642] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:12:21.666 [2024-08-13 06:07:23.262765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82818 ] 00:12:21.666 [2024-08-13 06:07:23.405977] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.666 [2024-08-13 06:07:23.455658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.925 [2024-08-13 06:07:23.498768] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.925 [2024-08-13 06:07:23.498807] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.492 06:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:22.492 06:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:12:22.492 06:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:22.492 06:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.492 BaseBdev1_malloc 00:12:22.751 06:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:22.751 true 00:12:22.751 06:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:23.009 [2024-08-13 06:07:24.646756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:23.009 [2024-08-13 06:07:24.646818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.009 [2024-08-13 06:07:24.646837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:12:23.009 [2024-08-13 06:07:24.646850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.009 [2024-08-13 06:07:24.648980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.009 [2024-08-13 06:07:24.649021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.009 BaseBdev1 00:12:23.009 06:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:23.009 06:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.268 BaseBdev2_malloc 00:12:23.268 06:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:23.526 true 00:12:23.526 06:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:12:23.526 [2024-08-13 06:07:25.250180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:23.526 [2024-08-13 06:07:25.250288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.526 [2024-08-13 06:07:25.250325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:12:23.526 [2024-08-13 06:07:25.250353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.526 [2024-08-13 06:07:25.252343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.526 [2024-08-13 06:07:25.252431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.526 BaseBdev2 00:12:23.526 06:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:23.526 06:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:23.784 BaseBdev3_malloc 00:12:23.785 06:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:24.043 true 00:12:24.043 06:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:24.302 [2024-08-13 06:07:25.879052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:24.302 [2024-08-13 06:07:25.879106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.302 [2024-08-13 06:07:25.879124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:12:24.302 [2024-08-13 06:07:25.879134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.302 [2024-08-13 06:07:25.881223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.302 [2024-08-13 06:07:25.881265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:24.302 BaseBdev3 00:12:24.302 06:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:24.302 [2024-08-13 06:07:26.078743] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.302 [2024-08-13 06:07:26.080467] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.302 [2024-08-13 06:07:26.080529] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.302 [2024-08-13 06:07:26.080699] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:24.302 [2024-08-13 06:07:26.080710] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.302 [2024-08-13 06:07:26.080963] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:24.302 [2024-08-13 06:07:26.081169] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:24.302 [2024-08-13 06:07:26.081183] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000001c80 00:12:24.302 [2024-08-13 06:07:26.081310] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.561 "name": "raid_bdev1", 00:12:24.561 "uuid": "70db7b21-2f25-4053-9b44-da05d3e75035", 00:12:24.561 "strip_size_kb": 0, 00:12:24.561 "state": "online", 00:12:24.561 "raid_level": "raid1", 00:12:24.561 "superblock": true, 00:12:24.561 "num_base_bdevs": 3, 00:12:24.561 "num_base_bdevs_discovered": 3, 00:12:24.561 "num_base_bdevs_operational": 3, 00:12:24.561 "base_bdevs_list": [ 00:12:24.561 { 00:12:24.561 "name": "BaseBdev1", 00:12:24.561 "uuid": "f55f1d87-edfe-5294-bc96-82c9167a7ee4", 00:12:24.561 "is_configured": true, 00:12:24.561 "data_offset": 2048, 00:12:24.561 "data_size": 63488 00:12:24.561 }, 00:12:24.561 { 00:12:24.561 "name": "BaseBdev2", 00:12:24.561 "uuid": "694f02f7-cb17-5e85-a667-89be981553de", 00:12:24.561 "is_configured": true, 00:12:24.561 "data_offset": 2048, 00:12:24.561 "data_size": 63488 00:12:24.561 }, 00:12:24.561 { 00:12:24.561 "name": "BaseBdev3", 00:12:24.561 "uuid": "4b8b7120-7dcf-5cc8-9012-60f20a965a04", 00:12:24.561 "is_configured": true, 00:12:24.561 "data_offset": 2048, 00:12:24.561 "data_size": 63488 00:12:24.561 } 00:12:24.561 ] 00:12:24.561 }' 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.561 06:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.129 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:25.129 06:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:25.388 [2024-08-13 06:07:26.961507] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:26.325 06:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read 
failure 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.325 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.584 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.584 "name": "raid_bdev1", 00:12:26.584 "uuid": "70db7b21-2f25-4053-9b44-da05d3e75035", 00:12:26.584 "strip_size_kb": 0, 00:12:26.584 "state": "online", 00:12:26.584 "raid_level": "raid1", 00:12:26.584 "superblock": true, 00:12:26.584 "num_base_bdevs": 3, 00:12:26.584 "num_base_bdevs_discovered": 3, 00:12:26.584 "num_base_bdevs_operational": 3, 00:12:26.584 "base_bdevs_list": [ 00:12:26.584 { 00:12:26.584 "name": "BaseBdev1", 00:12:26.584 "uuid": "f55f1d87-edfe-5294-bc96-82c9167a7ee4", 00:12:26.584 "is_configured": true, 00:12:26.584 "data_offset": 2048, 00:12:26.584 "data_size": 63488 00:12:26.584 }, 00:12:26.584 { 00:12:26.584 "name": "BaseBdev2", 00:12:26.584 "uuid": "694f02f7-cb17-5e85-a667-89be981553de", 00:12:26.584 "is_configured": true, 00:12:26.584 "data_offset": 2048, 00:12:26.584 "data_size": 63488 00:12:26.584 }, 00:12:26.584 { 00:12:26.584 "name": "BaseBdev3", 00:12:26.584 "uuid": "4b8b7120-7dcf-5cc8-9012-60f20a965a04", 00:12:26.584 "is_configured": true, 00:12:26.584 "data_offset": 2048, 00:12:26.584 "data_size": 63488 00:12:26.584 } 00:12:26.584 ] 00:12:26.584 }' 00:12:26.584 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.584 06:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.153 06:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:27.412 [2024-08-13 06:07:28.996249] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:12:27.412 [2024-08-13 06:07:28.996292] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.412 [2024-08-13 06:07:28.998600] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.412 [2024-08-13 06:07:28.998648] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.412 [2024-08-13 06:07:28.998749] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.412 [2024-08-13 06:07:28.998758] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:12:27.412 0 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 82818 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 82818 ']' 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 82818 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82818 00:12:27.412 killing process with pid 82818 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82818' 00:12:27.412 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 82818 00:12:27.413 [2024-08-13 06:07:29.075865] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.413 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 82818 00:12:27.413 [2024-08-13 06:07:29.101063] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.iDlaEs3Mys 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:27.672 00:12:27.672 real 0m6.186s 00:12:27.672 user 0m9.584s 00:12:27.672 sys 0m1.009s 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.672 ************************************ 00:12:27.672 END TEST raid_read_error_test 00:12:27.672 ************************************ 00:12:27.672 06:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.672 06:07:29 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:27.672 06:07:29 
bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:27.672 06:07:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:27.672 06:07:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.672 ************************************ 00:12:27.672 START TEST raid_write_error_test 00:12:27.672 ************************************ 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 write 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:27.672 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.g5mXvw5vG7 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=82992 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 
82992 /var/tmp/spdk-raid.sock 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 82992 ']' 00:12:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:27.673 06:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.932 [2024-08-13 06:07:29.540212] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:12:27.932 [2024-08-13 06:07:29.540377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82992 ] 00:12:27.932 [2024-08-13 06:07:29.688490] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.191 [2024-08-13 06:07:29.735496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.191 [2024-08-13 06:07:29.778175] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.191 [2024-08-13 06:07:29.778307] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.760 06:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:28.760 06:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:12:28.760 06:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:28.760 06:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:28.760 BaseBdev1_malloc 00:12:28.760 06:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:29.018 true 00:12:29.019 06:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:29.277 [2024-08-13 06:07:30.909976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:29.277 [2024-08-13 06:07:30.910126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.277 [2024-08-13 06:07:30.910177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:12:29.277 [2024-08-13 06:07:30.910202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.278 [2024-08-13 06:07:30.912326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.278 [2024-08-13 06:07:30.912367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.278 BaseBdev1 00:12:29.278 06:07:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:29.278 06:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.537 BaseBdev2_malloc 00:12:29.537 06:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:29.537 true 00:12:29.537 06:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:29.796 [2024-08-13 06:07:31.501695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:29.797 [2024-08-13 06:07:31.501832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.797 [2024-08-13 06:07:31.501872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:12:29.797 [2024-08-13 06:07:31.501909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.797 [2024-08-13 06:07:31.503941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.797 [2024-08-13 06:07:31.504040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.797 BaseBdev2 00:12:29.797 06:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:29.797 06:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:30.056 BaseBdev3_malloc 00:12:30.056 06:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:30.315 true 00:12:30.315 06:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:30.574 [2024-08-13 06:07:32.120627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:30.574 [2024-08-13 06:07:32.120763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.574 [2024-08-13 06:07:32.120806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:12:30.574 [2024-08-13 06:07:32.120837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.574 [2024-08-13 06:07:32.122811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.574 [2024-08-13 06:07:32.122889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:30.574 BaseBdev3 00:12:30.574 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:30.574 [2024-08-13 06:07:32.336331] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.574 [2024-08-13 06:07:32.338087] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.574 [2024-08-13 06:07:32.338190] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.574 [2024-08-13 06:07:32.338430] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:30.574 [2024-08-13 06:07:32.338477] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.574 [2024-08-13 06:07:32.338774] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:30.574 [2024-08-13 06:07:32.338950] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:30.574 [2024-08-13 06:07:32.338998] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:12:30.574 [2024-08-13 06:07:32.339181] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:30.833 "name": "raid_bdev1", 00:12:30.833 "uuid": "67654c30-1212-40a0-b84d-c69399bf91f6", 00:12:30.833 "strip_size_kb": 0, 00:12:30.833 "state": "online", 00:12:30.833 "raid_level": "raid1", 00:12:30.833 "superblock": true, 00:12:30.833 "num_base_bdevs": 3, 00:12:30.833 "num_base_bdevs_discovered": 3, 00:12:30.833 "num_base_bdevs_operational": 3, 00:12:30.833 "base_bdevs_list": [ 00:12:30.833 { 00:12:30.833 "name": "BaseBdev1", 00:12:30.833 "uuid": "c9dbbaa4-b18b-5754-9ee7-27e0e631f3e6", 00:12:30.833 "is_configured": true, 00:12:30.833 "data_offset": 2048, 00:12:30.833 "data_size": 63488 00:12:30.833 }, 00:12:30.833 { 00:12:30.833 "name": "BaseBdev2", 00:12:30.833 "uuid": "893a0e5a-4788-533a-baf2-a2c913395388", 00:12:30.833 "is_configured": true, 00:12:30.833 "data_offset": 2048, 00:12:30.833 "data_size": 63488 00:12:30.833 }, 00:12:30.833 { 00:12:30.833 "name": "BaseBdev3", 00:12:30.833 "uuid": "5dd872bf-ff3c-5591-b08d-2dec6e6a9954", 00:12:30.833 "is_configured": true, 00:12:30.833 "data_offset": 2048, 00:12:30.833 "data_size": 63488 00:12:30.833 } 00:12:30.833 ] 00:12:30.833 }' 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:30.833 06:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.402 06:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:31.402 06:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:31.661 [2024-08-13 06:07:33.255034] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:32.599 [2024-08-13 06:07:34.339136] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:32.599 [2024-08-13 06:07:34.339321] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:32.599 [2024-08-13 06:07:34.339601] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.599 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.859 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.859 "name": "raid_bdev1", 00:12:32.859 "uuid": "67654c30-1212-40a0-b84d-c69399bf91f6", 00:12:32.859 "strip_size_kb": 0, 00:12:32.859 "state": "online", 00:12:32.859 "raid_level": "raid1", 00:12:32.859 "superblock": true, 00:12:32.859 "num_base_bdevs": 3, 00:12:32.859 "num_base_bdevs_discovered": 2, 00:12:32.859 "num_base_bdevs_operational": 2, 00:12:32.859 "base_bdevs_list": [ 00:12:32.859 { 00:12:32.859 "name": 
null, 00:12:32.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.859 "is_configured": false, 00:12:32.859 "data_offset": 2048, 00:12:32.859 "data_size": 63488 00:12:32.859 }, 00:12:32.859 { 00:12:32.859 "name": "BaseBdev2", 00:12:32.859 "uuid": "893a0e5a-4788-533a-baf2-a2c913395388", 00:12:32.859 "is_configured": true, 00:12:32.859 "data_offset": 2048, 00:12:32.859 "data_size": 63488 00:12:32.859 }, 00:12:32.859 { 00:12:32.859 "name": "BaseBdev3", 00:12:32.859 "uuid": "5dd872bf-ff3c-5591-b08d-2dec6e6a9954", 00:12:32.859 "is_configured": true, 00:12:32.859 "data_offset": 2048, 00:12:32.859 "data_size": 63488 00:12:32.859 } 00:12:32.859 ] 00:12:32.859 }' 00:12:32.859 06:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.859 06:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.427 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:33.686 [2024-08-13 06:07:35.319566] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.686 [2024-08-13 06:07:35.319616] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.686 [2024-08-13 06:07:35.321872] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.686 [2024-08-13 06:07:35.321920] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.686 [2024-08-13 06:07:35.321991] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.686 [2024-08-13 06:07:35.322014] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:12:33.686 0 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 82992 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 82992 ']' 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 82992 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82992 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:33.686 killing process with pid 82992 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82992' 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 82992 00:12:33.686 [2024-08-13 06:07:35.378147] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.686 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 82992 00:12:33.686 [2024-08-13 06:07:35.403422] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.g5mXvw5vG7 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 
00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:33.946 00:12:33.946 real 0m6.226s 00:12:33.946 user 0m9.648s 00:12:33.946 sys 0m1.005s 00:12:33.946 ************************************ 00:12:33.946 END TEST raid_write_error_test 00:12:33.946 ************************************ 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:33.946 06:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 06:07:35 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:12:33.946 06:07:35 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:12:33.946 06:07:35 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:33.946 06:07:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:33.946 06:07:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:33.946 06:07:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.946 ************************************ 00:12:33.946 START TEST raid_state_function_test 00:12:33.946 ************************************ 00:12:33.946 06:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:12:33.946 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:33.946 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:33.946 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:33.946 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # 
echo BaseBdev4 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=83164 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 83164' 00:12:34.206 Process raid pid: 83164 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 83164 /var/tmp/spdk-raid.sock 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 83164 ']' 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:34.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.206 06:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.206 [2024-08-13 06:07:35.838380] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:12:34.206 [2024-08-13 06:07:35.838636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.206 [2024-08-13 06:07:35.987844] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.465 [2024-08-13 06:07:36.034320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.465 [2024-08-13 06:07:36.077531] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.465 [2024-08-13 06:07:36.077658] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.032 06:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:35.032 06:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:12:35.032 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:35.291 [2024-08-13 06:07:36.833565] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.291 [2024-08-13 06:07:36.833682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.291 [2024-08-13 06:07:36.833715] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.291 [2024-08-13 06:07:36.833736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.291 [2024-08-13 06:07:36.833759] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:35.291 [2024-08-13 06:07:36.833778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:35.291 [2024-08-13 06:07:36.833800] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:35.291 [2024-08-13 06:07:36.833818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.291 06:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.291 06:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.291 "name": "Existed_Raid", 00:12:35.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.291 "strip_size_kb": 64, 00:12:35.291 "state": "configuring", 00:12:35.291 "raid_level": "raid0", 00:12:35.291 "superblock": false, 00:12:35.291 "num_base_bdevs": 4, 00:12:35.291 "num_base_bdevs_discovered": 0, 00:12:35.291 "num_base_bdevs_operational": 4, 00:12:35.291 "base_bdevs_list": [ 00:12:35.291 { 00:12:35.291 "name": "BaseBdev1", 00:12:35.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.291 "is_configured": false, 00:12:35.291 "data_offset": 0, 00:12:35.291 "data_size": 0 00:12:35.291 }, 00:12:35.291 { 00:12:35.291 "name": "BaseBdev2", 00:12:35.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.291 "is_configured": false, 00:12:35.291 "data_offset": 0, 00:12:35.291 "data_size": 0 00:12:35.291 }, 00:12:35.291 { 00:12:35.291 "name": "BaseBdev3", 00:12:35.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.291 "is_configured": false, 00:12:35.291 "data_offset": 0, 00:12:35.291 "data_size": 0 00:12:35.291 }, 00:12:35.291 { 00:12:35.291 "name": "BaseBdev4", 00:12:35.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.291 "is_configured": false, 00:12:35.291 "data_offset": 0, 00:12:35.291 "data_size": 0 00:12:35.291 } 00:12:35.291 ] 00:12:35.291 }' 00:12:35.291 06:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.291 06:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 06:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:36.116 [2024-08-13 06:07:37.747829] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.116 [2024-08-13 06:07:37.747907] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:12:36.116 06:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:36.375 [2024-08-13 06:07:37.939503] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.375 [2024-08-13 06:07:37.939575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.375 [2024-08-13 06:07:37.939604] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:36.375 [2024-08-13 06:07:37.939623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:36.375 [2024-08-13 06:07:37.939640] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:36.375 [2024-08-13 06:07:37.939657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:36.375 [2024-08-13 06:07:37.939675] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:36.375 [2024-08-13 06:07:37.939707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:12:36.375 06:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.375 [2024-08-13 06:07:38.123747] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.375 BaseBdev1 00:12:36.375 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:36.375 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:36.375 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:36.375 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:36.375 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:36.375 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:36.375 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:36.634 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:36.892 [ 00:12:36.892 { 00:12:36.892 "name": "BaseBdev1", 00:12:36.892 "aliases": [ 00:12:36.892 "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5" 00:12:36.892 ], 00:12:36.892 "product_name": "Malloc disk", 00:12:36.892 "block_size": 512, 00:12:36.892 "num_blocks": 65536, 00:12:36.892 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:36.892 "assigned_rate_limits": { 00:12:36.892 "rw_ios_per_sec": 0, 00:12:36.892 "rw_mbytes_per_sec": 0, 00:12:36.892 "r_mbytes_per_sec": 0, 00:12:36.892 "w_mbytes_per_sec": 0 00:12:36.892 }, 00:12:36.892 "claimed": true, 00:12:36.892 "claim_type": "exclusive_write", 00:12:36.892 "zoned": false, 00:12:36.892 "supported_io_types": { 00:12:36.892 "read": true, 00:12:36.892 "write": true, 00:12:36.892 "unmap": true, 00:12:36.892 "flush": true, 00:12:36.892 "reset": true, 00:12:36.892 "nvme_admin": false, 00:12:36.892 "nvme_io": false, 00:12:36.892 "nvme_io_md": false, 00:12:36.892 "write_zeroes": true, 00:12:36.892 "zcopy": true, 00:12:36.892 "get_zone_info": false, 00:12:36.892 "zone_management": false, 00:12:36.892 "zone_append": false, 00:12:36.892 "compare": false, 00:12:36.892 "compare_and_write": false, 00:12:36.892 "abort": true, 00:12:36.892 "seek_hole": false, 00:12:36.892 "seek_data": false, 00:12:36.892 "copy": true, 00:12:36.892 "nvme_iov_md": false 00:12:36.892 }, 00:12:36.892 "memory_domains": [ 00:12:36.892 { 00:12:36.892 "dma_device_id": "system", 00:12:36.892 "dma_device_type": 1 00:12:36.892 }, 00:12:36.892 { 00:12:36.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.892 "dma_device_type": 2 00:12:36.892 } 00:12:36.892 ], 00:12:36.892 "driver_specific": {} 00:12:36.892 } 00:12:36.892 ] 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.892 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.151 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.151 "name": "Existed_Raid", 00:12:37.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.151 "strip_size_kb": 64, 00:12:37.151 "state": "configuring", 00:12:37.151 "raid_level": "raid0", 00:12:37.151 "superblock": false, 00:12:37.151 "num_base_bdevs": 4, 00:12:37.151 "num_base_bdevs_discovered": 1, 00:12:37.151 "num_base_bdevs_operational": 4, 00:12:37.151 "base_bdevs_list": [ 00:12:37.151 { 00:12:37.151 "name": "BaseBdev1", 00:12:37.151 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:37.151 "is_configured": true, 00:12:37.151 "data_offset": 0, 00:12:37.151 "data_size": 65536 00:12:37.151 }, 00:12:37.151 { 00:12:37.151 "name": "BaseBdev2", 00:12:37.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.151 "is_configured": false, 00:12:37.151 "data_offset": 0, 00:12:37.151 "data_size": 0 00:12:37.151 }, 00:12:37.151 { 00:12:37.151 "name": "BaseBdev3", 00:12:37.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.151 "is_configured": false, 00:12:37.151 "data_offset": 0, 00:12:37.151 "data_size": 0 00:12:37.151 }, 00:12:37.151 { 00:12:37.151 "name": "BaseBdev4", 00:12:37.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.151 "is_configured": false, 00:12:37.151 "data_offset": 0, 00:12:37.151 "data_size": 0 00:12:37.151 } 00:12:37.151 ] 00:12:37.151 }' 00:12:37.151 06:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.151 06:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.719 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:37.719 [2024-08-13 06:07:39.433575] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:37.719 [2024-08-13 06:07:39.433706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:37.719 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:37.978 [2024-08-13 06:07:39.621301] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:12:37.978 [2024-08-13 06:07:39.623130] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.978 [2024-08-13 06:07:39.623216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.978 [2024-08-13 06:07:39.623244] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:37.978 [2024-08-13 06:07:39.623268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:37.978 [2024-08-13 06:07:39.623303] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:37.978 [2024-08-13 06:07:39.623323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:37.978 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:37.979 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:37.979 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.979 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.238 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.238 "name": "Existed_Raid", 00:12:38.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.238 "strip_size_kb": 64, 00:12:38.238 "state": "configuring", 00:12:38.238 "raid_level": "raid0", 00:12:38.238 "superblock": false, 00:12:38.238 "num_base_bdevs": 4, 00:12:38.238 "num_base_bdevs_discovered": 1, 00:12:38.238 "num_base_bdevs_operational": 4, 00:12:38.238 "base_bdevs_list": [ 00:12:38.238 { 00:12:38.238 "name": "BaseBdev1", 00:12:38.238 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:38.238 "is_configured": true, 00:12:38.238 "data_offset": 0, 00:12:38.238 "data_size": 65536 00:12:38.238 }, 00:12:38.238 { 00:12:38.238 "name": "BaseBdev2", 00:12:38.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.238 "is_configured": false, 00:12:38.238 "data_offset": 0, 00:12:38.238 "data_size": 0 00:12:38.238 }, 00:12:38.238 { 00:12:38.238 "name": "BaseBdev3", 00:12:38.238 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:38.238 "is_configured": false, 00:12:38.238 "data_offset": 0, 00:12:38.238 "data_size": 0 00:12:38.238 }, 00:12:38.238 { 00:12:38.238 "name": "BaseBdev4", 00:12:38.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.238 "is_configured": false, 00:12:38.238 "data_offset": 0, 00:12:38.238 "data_size": 0 00:12:38.238 } 00:12:38.238 ] 00:12:38.238 }' 00:12:38.238 06:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.238 06:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.807 06:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:38.807 [2024-08-13 06:07:40.594785] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.066 BaseBdev2 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:39.066 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.326 [ 00:12:39.326 { 00:12:39.326 "name": "BaseBdev2", 00:12:39.326 "aliases": [ 00:12:39.326 "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d" 00:12:39.326 ], 00:12:39.326 "product_name": "Malloc disk", 00:12:39.326 "block_size": 512, 00:12:39.326 "num_blocks": 65536, 00:12:39.326 "uuid": "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d", 00:12:39.326 "assigned_rate_limits": { 00:12:39.326 "rw_ios_per_sec": 0, 00:12:39.326 "rw_mbytes_per_sec": 0, 00:12:39.326 "r_mbytes_per_sec": 0, 00:12:39.326 "w_mbytes_per_sec": 0 00:12:39.326 }, 00:12:39.326 "claimed": true, 00:12:39.326 "claim_type": "exclusive_write", 00:12:39.326 "zoned": false, 00:12:39.326 "supported_io_types": { 00:12:39.326 "read": true, 00:12:39.326 "write": true, 00:12:39.326 "unmap": true, 00:12:39.326 "flush": true, 00:12:39.326 "reset": true, 00:12:39.326 "nvme_admin": false, 00:12:39.326 "nvme_io": false, 00:12:39.326 "nvme_io_md": false, 00:12:39.326 "write_zeroes": true, 00:12:39.326 "zcopy": true, 00:12:39.326 "get_zone_info": false, 00:12:39.326 "zone_management": false, 00:12:39.326 "zone_append": false, 00:12:39.326 "compare": false, 00:12:39.326 "compare_and_write": false, 00:12:39.326 "abort": true, 00:12:39.326 "seek_hole": false, 00:12:39.326 "seek_data": false, 00:12:39.326 "copy": true, 00:12:39.326 "nvme_iov_md": false 00:12:39.326 }, 00:12:39.326 "memory_domains": [ 00:12:39.326 { 00:12:39.326 "dma_device_id": "system", 00:12:39.326 "dma_device_type": 1 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.326 
"dma_device_type": 2 00:12:39.326 } 00:12:39.326 ], 00:12:39.326 "driver_specific": {} 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 06:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:39.326 06:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.326 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.327 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.586 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:39.586 "name": "Existed_Raid", 00:12:39.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.586 "strip_size_kb": 64, 00:12:39.586 "state": "configuring", 00:12:39.586 "raid_level": "raid0", 00:12:39.586 "superblock": false, 00:12:39.586 "num_base_bdevs": 4, 00:12:39.586 "num_base_bdevs_discovered": 2, 00:12:39.586 "num_base_bdevs_operational": 4, 00:12:39.586 "base_bdevs_list": [ 00:12:39.586 { 00:12:39.586 "name": "BaseBdev1", 00:12:39.586 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:39.586 "is_configured": true, 00:12:39.586 "data_offset": 0, 00:12:39.586 "data_size": 65536 00:12:39.586 }, 00:12:39.586 { 00:12:39.586 "name": "BaseBdev2", 00:12:39.586 "uuid": "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d", 00:12:39.586 "is_configured": true, 00:12:39.586 "data_offset": 0, 00:12:39.586 "data_size": 65536 00:12:39.586 }, 00:12:39.586 { 00:12:39.586 "name": "BaseBdev3", 00:12:39.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.586 "is_configured": false, 00:12:39.586 "data_offset": 0, 00:12:39.586 "data_size": 0 00:12:39.586 }, 00:12:39.586 { 00:12:39.586 "name": "BaseBdev4", 00:12:39.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.586 "is_configured": false, 00:12:39.586 "data_offset": 0, 00:12:39.586 "data_size": 0 00:12:39.586 } 00:12:39.586 ] 00:12:39.586 }' 00:12:39.586 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:39.586 06:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 06:07:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:40.167 [2024-08-13 06:07:41.931711] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.167 BaseBdev3 00:12:40.167 06:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:40.167 06:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:40.167 06:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:40.167 06:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:40.167 06:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:40.167 06:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:40.167 06:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.446 06:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:40.705 [ 00:12:40.705 { 00:12:40.705 "name": "BaseBdev3", 00:12:40.705 "aliases": [ 00:12:40.705 "ed681822-1bdc-4b7b-83c0-b00eca006ba3" 00:12:40.705 ], 00:12:40.705 "product_name": "Malloc disk", 00:12:40.705 "block_size": 512, 00:12:40.705 "num_blocks": 65536, 00:12:40.706 "uuid": "ed681822-1bdc-4b7b-83c0-b00eca006ba3", 00:12:40.706 "assigned_rate_limits": { 00:12:40.706 "rw_ios_per_sec": 0, 00:12:40.706 "rw_mbytes_per_sec": 0, 00:12:40.706 "r_mbytes_per_sec": 0, 00:12:40.706 "w_mbytes_per_sec": 0 00:12:40.706 }, 00:12:40.706 "claimed": true, 00:12:40.706 "claim_type": "exclusive_write", 00:12:40.706 "zoned": false, 00:12:40.706 "supported_io_types": { 00:12:40.706 "read": true, 00:12:40.706 "write": true, 00:12:40.706 "unmap": true, 00:12:40.706 "flush": true, 00:12:40.706 "reset": true, 00:12:40.706 "nvme_admin": false, 00:12:40.706 "nvme_io": false, 00:12:40.706 "nvme_io_md": false, 00:12:40.706 "write_zeroes": true, 00:12:40.706 "zcopy": true, 00:12:40.706 "get_zone_info": false, 00:12:40.706 "zone_management": false, 00:12:40.706 "zone_append": false, 00:12:40.706 "compare": false, 00:12:40.706 "compare_and_write": false, 00:12:40.706 "abort": true, 00:12:40.706 "seek_hole": false, 00:12:40.706 "seek_data": false, 00:12:40.706 "copy": true, 00:12:40.706 "nvme_iov_md": false 00:12:40.706 }, 00:12:40.706 "memory_domains": [ 00:12:40.706 { 00:12:40.706 "dma_device_id": "system", 00:12:40.706 "dma_device_type": 1 00:12:40.706 }, 00:12:40.706 { 00:12:40.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.706 "dma_device_type": 2 00:12:40.706 } 00:12:40.706 ], 00:12:40.706 "driver_specific": {} 00:12:40.706 } 00:12:40.706 ] 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:40.706 06:07:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.706 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.965 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.965 "name": "Existed_Raid", 00:12:40.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.965 "strip_size_kb": 64, 00:12:40.965 "state": "configuring", 00:12:40.965 "raid_level": "raid0", 00:12:40.965 "superblock": false, 00:12:40.965 "num_base_bdevs": 4, 00:12:40.965 "num_base_bdevs_discovered": 3, 00:12:40.965 "num_base_bdevs_operational": 4, 00:12:40.965 "base_bdevs_list": [ 00:12:40.965 { 00:12:40.965 "name": "BaseBdev1", 00:12:40.965 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:40.965 "is_configured": true, 00:12:40.965 "data_offset": 0, 00:12:40.965 "data_size": 65536 00:12:40.965 }, 00:12:40.965 { 00:12:40.965 "name": "BaseBdev2", 00:12:40.965 "uuid": "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d", 00:12:40.965 "is_configured": true, 00:12:40.965 "data_offset": 0, 00:12:40.965 "data_size": 65536 00:12:40.965 }, 00:12:40.965 { 00:12:40.965 "name": "BaseBdev3", 00:12:40.965 "uuid": "ed681822-1bdc-4b7b-83c0-b00eca006ba3", 00:12:40.965 "is_configured": true, 00:12:40.965 "data_offset": 0, 00:12:40.965 "data_size": 65536 00:12:40.965 }, 00:12:40.965 { 00:12:40.965 "name": "BaseBdev4", 00:12:40.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.965 "is_configured": false, 00:12:40.965 "data_offset": 0, 00:12:40.965 "data_size": 0 00:12:40.965 } 00:12:40.965 ] 00:12:40.965 }' 00:12:40.965 06:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.965 06:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.534 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:41.534 [2024-08-13 06:07:43.304475] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.534 [2024-08-13 06:07:43.304597] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:41.534 [2024-08-13 06:07:43.304627] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:41.534 [2024-08-13 06:07:43.304952] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:12:41.534 [2024-08-13 06:07:43.305142] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:41.534 [2024-08-13 06:07:43.305194] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:41.534 [2024-08-13 06:07:43.305404] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.534 BaseBdev4 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:41.793 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:42.053 [ 00:12:42.053 { 00:12:42.053 "name": "BaseBdev4", 00:12:42.053 "aliases": [ 00:12:42.053 "60ac275c-b9bd-4c64-ade6-a8f9c2a167b9" 00:12:42.053 ], 00:12:42.053 "product_name": "Malloc disk", 00:12:42.053 "block_size": 512, 00:12:42.053 "num_blocks": 65536, 00:12:42.053 "uuid": "60ac275c-b9bd-4c64-ade6-a8f9c2a167b9", 00:12:42.053 "assigned_rate_limits": { 00:12:42.053 "rw_ios_per_sec": 0, 00:12:42.053 "rw_mbytes_per_sec": 0, 00:12:42.053 "r_mbytes_per_sec": 0, 00:12:42.053 "w_mbytes_per_sec": 0 00:12:42.053 }, 00:12:42.053 "claimed": true, 00:12:42.053 "claim_type": "exclusive_write", 00:12:42.053 "zoned": false, 00:12:42.053 "supported_io_types": { 00:12:42.053 "read": true, 00:12:42.053 "write": true, 00:12:42.053 "unmap": true, 00:12:42.053 "flush": true, 00:12:42.053 "reset": true, 00:12:42.053 "nvme_admin": false, 00:12:42.053 "nvme_io": false, 00:12:42.053 "nvme_io_md": false, 00:12:42.053 "write_zeroes": true, 00:12:42.053 "zcopy": true, 00:12:42.053 "get_zone_info": false, 00:12:42.053 "zone_management": false, 00:12:42.053 "zone_append": false, 00:12:42.053 "compare": false, 00:12:42.053 "compare_and_write": false, 00:12:42.053 "abort": true, 00:12:42.053 "seek_hole": false, 00:12:42.053 "seek_data": false, 00:12:42.053 "copy": true, 00:12:42.053 "nvme_iov_md": false 00:12:42.053 }, 00:12:42.053 "memory_domains": [ 00:12:42.053 { 00:12:42.053 "dma_device_id": "system", 00:12:42.053 "dma_device_type": 1 00:12:42.053 }, 00:12:42.053 { 00:12:42.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.053 "dma_device_type": 2 00:12:42.053 } 00:12:42.053 ], 00:12:42.053 "driver_specific": {} 00:12:42.053 } 00:12:42.053 ] 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:42.053 
06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.053 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.312 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.312 "name": "Existed_Raid", 00:12:42.312 "uuid": "660cf035-08a1-4f41-9c5e-f1dc1c53d488", 00:12:42.312 "strip_size_kb": 64, 00:12:42.312 "state": "online", 00:12:42.312 "raid_level": "raid0", 00:12:42.312 "superblock": false, 00:12:42.312 "num_base_bdevs": 4, 00:12:42.312 "num_base_bdevs_discovered": 4, 00:12:42.312 "num_base_bdevs_operational": 4, 00:12:42.312 "base_bdevs_list": [ 00:12:42.312 { 00:12:42.312 "name": "BaseBdev1", 00:12:42.312 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:42.312 "is_configured": true, 00:12:42.312 "data_offset": 0, 00:12:42.312 "data_size": 65536 00:12:42.312 }, 00:12:42.312 { 00:12:42.312 "name": "BaseBdev2", 00:12:42.312 "uuid": "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d", 00:12:42.312 "is_configured": true, 00:12:42.312 "data_offset": 0, 00:12:42.312 "data_size": 65536 00:12:42.312 }, 00:12:42.312 { 00:12:42.312 "name": "BaseBdev3", 00:12:42.312 "uuid": "ed681822-1bdc-4b7b-83c0-b00eca006ba3", 00:12:42.312 "is_configured": true, 00:12:42.312 "data_offset": 0, 00:12:42.312 "data_size": 65536 00:12:42.312 }, 00:12:42.312 { 00:12:42.312 "name": "BaseBdev4", 00:12:42.312 "uuid": "60ac275c-b9bd-4c64-ade6-a8f9c2a167b9", 00:12:42.312 "is_configured": true, 00:12:42.312 "data_offset": 0, 00:12:42.312 "data_size": 65536 00:12:42.312 } 00:12:42.312 ] 00:12:42.312 }' 00:12:42.312 06:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.312 06:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
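[annotation] By this point the fourth base bdev has been claimed, the raid has gone online with num_base_bdevs_discovered=4, and the trace enters verify_raid_bdev_properties. A hedged sketch of the comparison pattern that follows (the jq filter is the one visible later in this trace; the loop shape is illustrative rather than the exact helper):
# Illustrative: dump the raid volume once, then compare a field against
# every configured base bdev named in its base_bdevs_list.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_bdev_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
    | select(.is_configured == true).name' <<< "$raid_bdev_info")
for name in $base_bdev_names; do
    base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # block sizes must match; in this run both sides report 512
    [[ $(jq .block_size <<< "$raid_bdev_info") == $(jq .block_size <<< "$base_bdev_info") ]]
done
[end annotation]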
00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:42.880 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:43.140 [2024-08-13 06:07:44.674487] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.140 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:43.140 "name": "Existed_Raid", 00:12:43.140 "aliases": [ 00:12:43.140 "660cf035-08a1-4f41-9c5e-f1dc1c53d488" 00:12:43.140 ], 00:12:43.140 "product_name": "Raid Volume", 00:12:43.140 "block_size": 512, 00:12:43.140 "num_blocks": 262144, 00:12:43.140 "uuid": "660cf035-08a1-4f41-9c5e-f1dc1c53d488", 00:12:43.140 "assigned_rate_limits": { 00:12:43.140 "rw_ios_per_sec": 0, 00:12:43.140 "rw_mbytes_per_sec": 0, 00:12:43.140 "r_mbytes_per_sec": 0, 00:12:43.140 "w_mbytes_per_sec": 0 00:12:43.140 }, 00:12:43.140 "claimed": false, 00:12:43.140 "zoned": false, 00:12:43.140 "supported_io_types": { 00:12:43.140 "read": true, 00:12:43.140 "write": true, 00:12:43.140 "unmap": true, 00:12:43.140 "flush": true, 00:12:43.140 "reset": true, 00:12:43.140 "nvme_admin": false, 00:12:43.140 "nvme_io": false, 00:12:43.140 "nvme_io_md": false, 00:12:43.140 "write_zeroes": true, 00:12:43.140 "zcopy": false, 00:12:43.140 "get_zone_info": false, 00:12:43.140 "zone_management": false, 00:12:43.140 "zone_append": false, 00:12:43.140 "compare": false, 00:12:43.140 "compare_and_write": false, 00:12:43.140 "abort": false, 00:12:43.140 "seek_hole": false, 00:12:43.140 "seek_data": false, 00:12:43.140 "copy": false, 00:12:43.140 "nvme_iov_md": false 00:12:43.140 }, 00:12:43.140 "memory_domains": [ 00:12:43.140 { 00:12:43.140 "dma_device_id": "system", 00:12:43.140 "dma_device_type": 1 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.140 "dma_device_type": 2 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "dma_device_id": "system", 00:12:43.140 "dma_device_type": 1 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.140 "dma_device_type": 2 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "dma_device_id": "system", 00:12:43.140 "dma_device_type": 1 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.140 "dma_device_type": 2 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "dma_device_id": "system", 00:12:43.140 "dma_device_type": 1 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.140 "dma_device_type": 2 00:12:43.140 } 00:12:43.140 ], 00:12:43.140 "driver_specific": { 00:12:43.140 "raid": { 00:12:43.140 "uuid": "660cf035-08a1-4f41-9c5e-f1dc1c53d488", 00:12:43.140 "strip_size_kb": 64, 00:12:43.140 "state": "online", 00:12:43.140 "raid_level": "raid0", 00:12:43.140 "superblock": false, 00:12:43.140 "num_base_bdevs": 4, 00:12:43.140 "num_base_bdevs_discovered": 4, 00:12:43.140 "num_base_bdevs_operational": 4, 00:12:43.140 "base_bdevs_list": [ 00:12:43.140 { 00:12:43.140 "name": "BaseBdev1", 00:12:43.140 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:43.140 "is_configured": true, 00:12:43.140 "data_offset": 0, 00:12:43.140 "data_size": 65536 
00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "name": "BaseBdev2", 00:12:43.140 "uuid": "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d", 00:12:43.140 "is_configured": true, 00:12:43.140 "data_offset": 0, 00:12:43.140 "data_size": 65536 00:12:43.140 }, 00:12:43.140 { 00:12:43.140 "name": "BaseBdev3", 00:12:43.140 "uuid": "ed681822-1bdc-4b7b-83c0-b00eca006ba3", 00:12:43.141 "is_configured": true, 00:12:43.141 "data_offset": 0, 00:12:43.141 "data_size": 65536 00:12:43.141 }, 00:12:43.141 { 00:12:43.141 "name": "BaseBdev4", 00:12:43.141 "uuid": "60ac275c-b9bd-4c64-ade6-a8f9c2a167b9", 00:12:43.141 "is_configured": true, 00:12:43.141 "data_offset": 0, 00:12:43.141 "data_size": 65536 00:12:43.141 } 00:12:43.141 ] 00:12:43.141 } 00:12:43.141 } 00:12:43.141 }' 00:12:43.141 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.141 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:43.141 BaseBdev2 00:12:43.141 BaseBdev3 00:12:43.141 BaseBdev4' 00:12:43.141 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.141 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:43.141 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.400 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.400 "name": "BaseBdev1", 00:12:43.400 "aliases": [ 00:12:43.400 "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5" 00:12:43.400 ], 00:12:43.400 "product_name": "Malloc disk", 00:12:43.400 "block_size": 512, 00:12:43.400 "num_blocks": 65536, 00:12:43.400 "uuid": "a6bf4970-4fd8-4ed3-a907-ad0145ef09f5", 00:12:43.400 "assigned_rate_limits": { 00:12:43.400 "rw_ios_per_sec": 0, 00:12:43.400 "rw_mbytes_per_sec": 0, 00:12:43.400 "r_mbytes_per_sec": 0, 00:12:43.400 "w_mbytes_per_sec": 0 00:12:43.400 }, 00:12:43.400 "claimed": true, 00:12:43.400 "claim_type": "exclusive_write", 00:12:43.400 "zoned": false, 00:12:43.400 "supported_io_types": { 00:12:43.400 "read": true, 00:12:43.400 "write": true, 00:12:43.400 "unmap": true, 00:12:43.400 "flush": true, 00:12:43.400 "reset": true, 00:12:43.400 "nvme_admin": false, 00:12:43.400 "nvme_io": false, 00:12:43.400 "nvme_io_md": false, 00:12:43.400 "write_zeroes": true, 00:12:43.400 "zcopy": true, 00:12:43.400 "get_zone_info": false, 00:12:43.400 "zone_management": false, 00:12:43.400 "zone_append": false, 00:12:43.400 "compare": false, 00:12:43.400 "compare_and_write": false, 00:12:43.400 "abort": true, 00:12:43.400 "seek_hole": false, 00:12:43.400 "seek_data": false, 00:12:43.400 "copy": true, 00:12:43.400 "nvme_iov_md": false 00:12:43.400 }, 00:12:43.400 "memory_domains": [ 00:12:43.400 { 00:12:43.400 "dma_device_id": "system", 00:12:43.400 "dma_device_type": 1 00:12:43.400 }, 00:12:43.400 { 00:12:43.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.400 "dma_device_type": 2 00:12:43.400 } 00:12:43.400 ], 00:12:43.400 "driver_specific": {} 00:12:43.400 }' 00:12:43.400 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.400 06:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.400 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.400 
06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.400 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.400 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.400 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.400 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.400 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.400 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.659 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.659 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.659 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.659 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.659 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:43.918 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.918 "name": "BaseBdev2", 00:12:43.918 "aliases": [ 00:12:43.918 "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d" 00:12:43.918 ], 00:12:43.918 "product_name": "Malloc disk", 00:12:43.918 "block_size": 512, 00:12:43.918 "num_blocks": 65536, 00:12:43.918 "uuid": "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d", 00:12:43.918 "assigned_rate_limits": { 00:12:43.918 "rw_ios_per_sec": 0, 00:12:43.918 "rw_mbytes_per_sec": 0, 00:12:43.918 "r_mbytes_per_sec": 0, 00:12:43.918 "w_mbytes_per_sec": 0 00:12:43.918 }, 00:12:43.918 "claimed": true, 00:12:43.918 "claim_type": "exclusive_write", 00:12:43.918 "zoned": false, 00:12:43.918 "supported_io_types": { 00:12:43.918 "read": true, 00:12:43.918 "write": true, 00:12:43.918 "unmap": true, 00:12:43.918 "flush": true, 00:12:43.918 "reset": true, 00:12:43.918 "nvme_admin": false, 00:12:43.918 "nvme_io": false, 00:12:43.918 "nvme_io_md": false, 00:12:43.918 "write_zeroes": true, 00:12:43.918 "zcopy": true, 00:12:43.918 "get_zone_info": false, 00:12:43.918 "zone_management": false, 00:12:43.918 "zone_append": false, 00:12:43.918 "compare": false, 00:12:43.918 "compare_and_write": false, 00:12:43.918 "abort": true, 00:12:43.918 "seek_hole": false, 00:12:43.918 "seek_data": false, 00:12:43.918 "copy": true, 00:12:43.918 "nvme_iov_md": false 00:12:43.918 }, 00:12:43.918 "memory_domains": [ 00:12:43.918 { 00:12:43.918 "dma_device_id": "system", 00:12:43.918 "dma_device_type": 1 00:12:43.918 }, 00:12:43.918 { 00:12:43.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.918 "dma_device_type": 2 00:12:43.918 } 00:12:43.918 ], 00:12:43.918 "driver_specific": {} 00:12:43.918 }' 00:12:43.918 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.918 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.918 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.919 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.919 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.919 
06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.919 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.919 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.178 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.178 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.178 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.178 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.178 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:44.178 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:44.178 06:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.437 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.437 "name": "BaseBdev3", 00:12:44.437 "aliases": [ 00:12:44.437 "ed681822-1bdc-4b7b-83c0-b00eca006ba3" 00:12:44.437 ], 00:12:44.437 "product_name": "Malloc disk", 00:12:44.437 "block_size": 512, 00:12:44.437 "num_blocks": 65536, 00:12:44.437 "uuid": "ed681822-1bdc-4b7b-83c0-b00eca006ba3", 00:12:44.437 "assigned_rate_limits": { 00:12:44.437 "rw_ios_per_sec": 0, 00:12:44.437 "rw_mbytes_per_sec": 0, 00:12:44.437 "r_mbytes_per_sec": 0, 00:12:44.437 "w_mbytes_per_sec": 0 00:12:44.437 }, 00:12:44.437 "claimed": true, 00:12:44.437 "claim_type": "exclusive_write", 00:12:44.437 "zoned": false, 00:12:44.437 "supported_io_types": { 00:12:44.437 "read": true, 00:12:44.437 "write": true, 00:12:44.437 "unmap": true, 00:12:44.437 "flush": true, 00:12:44.437 "reset": true, 00:12:44.437 "nvme_admin": false, 00:12:44.437 "nvme_io": false, 00:12:44.437 "nvme_io_md": false, 00:12:44.437 "write_zeroes": true, 00:12:44.437 "zcopy": true, 00:12:44.437 "get_zone_info": false, 00:12:44.437 "zone_management": false, 00:12:44.437 "zone_append": false, 00:12:44.437 "compare": false, 00:12:44.437 "compare_and_write": false, 00:12:44.437 "abort": true, 00:12:44.437 "seek_hole": false, 00:12:44.437 "seek_data": false, 00:12:44.437 "copy": true, 00:12:44.437 "nvme_iov_md": false 00:12:44.437 }, 00:12:44.437 "memory_domains": [ 00:12:44.437 { 00:12:44.437 "dma_device_id": "system", 00:12:44.437 "dma_device_type": 1 00:12:44.437 }, 00:12:44.437 { 00:12:44.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.437 "dma_device_type": 2 00:12:44.437 } 00:12:44.437 ], 00:12:44.437 "driver_specific": {} 00:12:44.437 }' 00:12:44.437 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.437 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.437 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.437 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.437 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
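[annotation] The same trio of metadata checks repeats here for BaseBdev3 and, below, for BaseBdev4. A compact sketch of what each pass asserts, assuming the base_bdev_info local seen in the trace: plain malloc bdevs carry no metadata, so jq prints the literal null for each absent field, which is what the [[ null == null ]] trace lines are comparing.
# Sketch of the per-bdev metadata assertions: absent JSON fields come back
# as null from jq, so each check mirrors a [[ null == null ]] trace line.
for field in .md_size .md_interleave .dif_type; do
    [[ $(jq "$field" <<< "$base_bdev_info") == null ]]
done
[end annotation]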
00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:44.697 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.956 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.956 "name": "BaseBdev4", 00:12:44.956 "aliases": [ 00:12:44.956 "60ac275c-b9bd-4c64-ade6-a8f9c2a167b9" 00:12:44.956 ], 00:12:44.956 "product_name": "Malloc disk", 00:12:44.956 "block_size": 512, 00:12:44.956 "num_blocks": 65536, 00:12:44.956 "uuid": "60ac275c-b9bd-4c64-ade6-a8f9c2a167b9", 00:12:44.956 "assigned_rate_limits": { 00:12:44.956 "rw_ios_per_sec": 0, 00:12:44.956 "rw_mbytes_per_sec": 0, 00:12:44.956 "r_mbytes_per_sec": 0, 00:12:44.956 "w_mbytes_per_sec": 0 00:12:44.956 }, 00:12:44.956 "claimed": true, 00:12:44.956 "claim_type": "exclusive_write", 00:12:44.956 "zoned": false, 00:12:44.956 "supported_io_types": { 00:12:44.956 "read": true, 00:12:44.956 "write": true, 00:12:44.956 "unmap": true, 00:12:44.956 "flush": true, 00:12:44.956 "reset": true, 00:12:44.956 "nvme_admin": false, 00:12:44.956 "nvme_io": false, 00:12:44.956 "nvme_io_md": false, 00:12:44.956 "write_zeroes": true, 00:12:44.956 "zcopy": true, 00:12:44.956 "get_zone_info": false, 00:12:44.956 "zone_management": false, 00:12:44.956 "zone_append": false, 00:12:44.956 "compare": false, 00:12:44.956 "compare_and_write": false, 00:12:44.956 "abort": true, 00:12:44.956 "seek_hole": false, 00:12:44.956 "seek_data": false, 00:12:44.956 "copy": true, 00:12:44.956 "nvme_iov_md": false 00:12:44.956 }, 00:12:44.956 "memory_domains": [ 00:12:44.956 { 00:12:44.956 "dma_device_id": "system", 00:12:44.956 "dma_device_type": 1 00:12:44.956 }, 00:12:44.956 { 00:12:44.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.956 "dma_device_type": 2 00:12:44.956 } 00:12:44.956 ], 00:12:44.956 "driver_specific": {} 00:12:44.956 }' 00:12:44.956 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.956 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.956 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.956 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:45.216 06:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:45.476 [2024-08-13 06:07:47.146218] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.476 [2024-08-13 06:07:47.146291] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.476 [2024-08-13 06:07:47.146381] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.476 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.735 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.735 "name": "Existed_Raid", 00:12:45.735 "uuid": "660cf035-08a1-4f41-9c5e-f1dc1c53d488", 00:12:45.735 "strip_size_kb": 64, 00:12:45.735 "state": "offline", 00:12:45.735 "raid_level": "raid0", 00:12:45.735 "superblock": false, 00:12:45.735 "num_base_bdevs": 4, 00:12:45.735 "num_base_bdevs_discovered": 3, 00:12:45.735 "num_base_bdevs_operational": 3, 00:12:45.735 "base_bdevs_list": [ 00:12:45.735 { 00:12:45.735 "name": null, 00:12:45.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.735 "is_configured": false, 00:12:45.735 "data_offset": 0, 00:12:45.735 "data_size": 65536 
00:12:45.735 }, 00:12:45.735 { 00:12:45.735 "name": "BaseBdev2", 00:12:45.735 "uuid": "d10a849a-83f0-4e2c-9eb9-e8f57cc8922d", 00:12:45.735 "is_configured": true, 00:12:45.735 "data_offset": 0, 00:12:45.735 "data_size": 65536 00:12:45.735 }, 00:12:45.735 { 00:12:45.735 "name": "BaseBdev3", 00:12:45.735 "uuid": "ed681822-1bdc-4b7b-83c0-b00eca006ba3", 00:12:45.735 "is_configured": true, 00:12:45.735 "data_offset": 0, 00:12:45.735 "data_size": 65536 00:12:45.735 }, 00:12:45.735 { 00:12:45.735 "name": "BaseBdev4", 00:12:45.735 "uuid": "60ac275c-b9bd-4c64-ade6-a8f9c2a167b9", 00:12:45.735 "is_configured": true, 00:12:45.735 "data_offset": 0, 00:12:45.735 "data_size": 65536 00:12:45.735 } 00:12:45.735 ] 00:12:45.735 }' 00:12:45.735 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.735 06:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.303 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:46.303 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:46.303 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.303 06:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:46.563 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:46.563 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.563 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:46.563 [2024-08-13 06:07:48.335431] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.822 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:46.822 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:46.822 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.822 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:46.822 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:46.822 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.822 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:47.081 [2024-08-13 06:07:48.737833] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.081 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:47.081 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:47.081 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.081 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:47.340 06:07:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:47.340 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.340 06:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:47.599 [2024-08-13 06:07:49.132138] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:47.599 [2024-08-13 06:07:49.132192] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:47.599 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:47.599 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:47.599 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.599 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:47.859 BaseBdev2 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:47.859 06:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.118 06:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.377 [ 00:12:48.377 { 00:12:48.377 "name": "BaseBdev2", 00:12:48.377 "aliases": [ 00:12:48.377 "6eac8856-1900-46e8-97e2-2ad19096a9ea" 00:12:48.377 ], 00:12:48.377 "product_name": "Malloc disk", 00:12:48.377 "block_size": 512, 00:12:48.377 "num_blocks": 65536, 00:12:48.377 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:48.377 "assigned_rate_limits": { 00:12:48.377 "rw_ios_per_sec": 0, 00:12:48.377 "rw_mbytes_per_sec": 0, 00:12:48.377 "r_mbytes_per_sec": 0, 00:12:48.377 "w_mbytes_per_sec": 0 00:12:48.377 }, 00:12:48.377 "claimed": false, 00:12:48.377 "zoned": false, 00:12:48.377 "supported_io_types": { 
00:12:48.377 "read": true, 00:12:48.377 "write": true, 00:12:48.377 "unmap": true, 00:12:48.377 "flush": true, 00:12:48.377 "reset": true, 00:12:48.377 "nvme_admin": false, 00:12:48.377 "nvme_io": false, 00:12:48.377 "nvme_io_md": false, 00:12:48.377 "write_zeroes": true, 00:12:48.377 "zcopy": true, 00:12:48.377 "get_zone_info": false, 00:12:48.378 "zone_management": false, 00:12:48.378 "zone_append": false, 00:12:48.378 "compare": false, 00:12:48.378 "compare_and_write": false, 00:12:48.378 "abort": true, 00:12:48.378 "seek_hole": false, 00:12:48.378 "seek_data": false, 00:12:48.378 "copy": true, 00:12:48.378 "nvme_iov_md": false 00:12:48.378 }, 00:12:48.378 "memory_domains": [ 00:12:48.378 { 00:12:48.378 "dma_device_id": "system", 00:12:48.378 "dma_device_type": 1 00:12:48.378 }, 00:12:48.378 { 00:12:48.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.378 "dma_device_type": 2 00:12:48.378 } 00:12:48.378 ], 00:12:48.378 "driver_specific": {} 00:12:48.378 } 00:12:48.378 ] 00:12:48.378 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:48.378 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:48.378 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:48.378 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:48.378 BaseBdev3 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.637 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:48.896 [ 00:12:48.896 { 00:12:48.896 "name": "BaseBdev3", 00:12:48.896 "aliases": [ 00:12:48.896 "b0247cce-fd4c-48d3-8a33-de064ce302fa" 00:12:48.896 ], 00:12:48.896 "product_name": "Malloc disk", 00:12:48.896 "block_size": 512, 00:12:48.896 "num_blocks": 65536, 00:12:48.896 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:48.896 "assigned_rate_limits": { 00:12:48.896 "rw_ios_per_sec": 0, 00:12:48.896 "rw_mbytes_per_sec": 0, 00:12:48.896 "r_mbytes_per_sec": 0, 00:12:48.896 "w_mbytes_per_sec": 0 00:12:48.896 }, 00:12:48.896 "claimed": false, 00:12:48.896 "zoned": false, 00:12:48.896 "supported_io_types": { 00:12:48.896 "read": true, 00:12:48.896 "write": true, 00:12:48.896 "unmap": true, 00:12:48.896 "flush": true, 00:12:48.896 "reset": true, 00:12:48.896 "nvme_admin": false, 00:12:48.896 "nvme_io": false, 00:12:48.896 "nvme_io_md": false, 00:12:48.896 "write_zeroes": true, 00:12:48.896 "zcopy": true, 00:12:48.897 "get_zone_info": false, 
00:12:48.897 "zone_management": false, 00:12:48.897 "zone_append": false, 00:12:48.897 "compare": false, 00:12:48.897 "compare_and_write": false, 00:12:48.897 "abort": true, 00:12:48.897 "seek_hole": false, 00:12:48.897 "seek_data": false, 00:12:48.897 "copy": true, 00:12:48.897 "nvme_iov_md": false 00:12:48.897 }, 00:12:48.897 "memory_domains": [ 00:12:48.897 { 00:12:48.897 "dma_device_id": "system", 00:12:48.897 "dma_device_type": 1 00:12:48.897 }, 00:12:48.897 { 00:12:48.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.897 "dma_device_type": 2 00:12:48.897 } 00:12:48.897 ], 00:12:48.897 "driver_specific": {} 00:12:48.897 } 00:12:48.897 ] 00:12:48.897 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:48.897 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:48.897 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:48.897 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:49.156 BaseBdev4 00:12:49.156 06:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:12:49.156 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:12:49.156 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:49.156 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:49.156 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:49.156 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:49.156 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.415 06:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:49.415 [ 00:12:49.415 { 00:12:49.415 "name": "BaseBdev4", 00:12:49.415 "aliases": [ 00:12:49.415 "f700d754-0a32-44a6-b68c-568ceacba77a" 00:12:49.415 ], 00:12:49.415 "product_name": "Malloc disk", 00:12:49.415 "block_size": 512, 00:12:49.415 "num_blocks": 65536, 00:12:49.415 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:49.415 "assigned_rate_limits": { 00:12:49.415 "rw_ios_per_sec": 0, 00:12:49.415 "rw_mbytes_per_sec": 0, 00:12:49.415 "r_mbytes_per_sec": 0, 00:12:49.415 "w_mbytes_per_sec": 0 00:12:49.415 }, 00:12:49.415 "claimed": false, 00:12:49.415 "zoned": false, 00:12:49.415 "supported_io_types": { 00:12:49.415 "read": true, 00:12:49.415 "write": true, 00:12:49.415 "unmap": true, 00:12:49.415 "flush": true, 00:12:49.415 "reset": true, 00:12:49.415 "nvme_admin": false, 00:12:49.415 "nvme_io": false, 00:12:49.415 "nvme_io_md": false, 00:12:49.415 "write_zeroes": true, 00:12:49.415 "zcopy": true, 00:12:49.415 "get_zone_info": false, 00:12:49.415 "zone_management": false, 00:12:49.415 "zone_append": false, 00:12:49.415 "compare": false, 00:12:49.415 "compare_and_write": false, 00:12:49.415 "abort": true, 00:12:49.415 "seek_hole": false, 00:12:49.415 "seek_data": false, 00:12:49.415 "copy": true, 00:12:49.415 "nvme_iov_md": false 00:12:49.415 }, 00:12:49.415 "memory_domains": 
[ 00:12:49.415 { 00:12:49.415 "dma_device_id": "system", 00:12:49.415 "dma_device_type": 1 00:12:49.415 }, 00:12:49.415 { 00:12:49.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.415 "dma_device_type": 2 00:12:49.415 } 00:12:49.415 ], 00:12:49.415 "driver_specific": {} 00:12:49.415 } 00:12:49.415 ] 00:12:49.415 06:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:49.415 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:49.415 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:49.416 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:49.675 [2024-08-13 06:07:51.336205] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.675 [2024-08-13 06:07:51.336334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.675 [2024-08-13 06:07:51.336363] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.675 [2024-08-13 06:07:51.338210] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.675 [2024-08-13 06:07:51.338263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.675 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.934 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.934 "name": "Existed_Raid", 00:12:49.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.934 "strip_size_kb": 64, 00:12:49.934 "state": "configuring", 00:12:49.934 "raid_level": "raid0", 00:12:49.934 "superblock": false, 00:12:49.934 "num_base_bdevs": 4, 00:12:49.934 "num_base_bdevs_discovered": 3, 00:12:49.934 "num_base_bdevs_operational": 4, 00:12:49.934 "base_bdevs_list": [ 00:12:49.934 { 00:12:49.934 "name": "BaseBdev1", 00:12:49.934 "uuid": 
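Each base bdev above is recreated as a malloc disk with bdev_malloc_create 32 512, i.e. 32 MiB of 512-byte blocks, which matches the num_blocks 65536 / block_size 512 reported in the dumps. The waitforbdev helper then lets examine callbacks settle and polls for the bdev with a timeout. A hedged sketch of that sequence, reusing the socket from this run:

# Recreate one base bdev and wait until it is registered; -t 2000 mirrors
# the harness's bdev_timeout=2000 visible in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
$rpc -s $sock bdev_wait_for_examine
$rpc -s $sock bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null && echo "BaseBdev2 ready"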
"00000000-0000-0000-0000-000000000000", 00:12:49.934 "is_configured": false, 00:12:49.934 "data_offset": 0, 00:12:49.934 "data_size": 0 00:12:49.934 }, 00:12:49.934 { 00:12:49.934 "name": "BaseBdev2", 00:12:49.934 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:49.934 "is_configured": true, 00:12:49.934 "data_offset": 0, 00:12:49.935 "data_size": 65536 00:12:49.935 }, 00:12:49.935 { 00:12:49.935 "name": "BaseBdev3", 00:12:49.935 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:49.935 "is_configured": true, 00:12:49.935 "data_offset": 0, 00:12:49.935 "data_size": 65536 00:12:49.935 }, 00:12:49.935 { 00:12:49.935 "name": "BaseBdev4", 00:12:49.935 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:49.935 "is_configured": true, 00:12:49.935 "data_offset": 0, 00:12:49.935 "data_size": 65536 00:12:49.935 } 00:12:49.935 ] 00:12:49.935 }' 00:12:49.935 06:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.935 06:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.505 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:50.764 [2024-08-13 06:07:52.306458] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.764 "name": "Existed_Raid", 00:12:50.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.764 "strip_size_kb": 64, 00:12:50.764 "state": "configuring", 00:12:50.764 "raid_level": "raid0", 00:12:50.764 "superblock": false, 00:12:50.764 "num_base_bdevs": 4, 00:12:50.764 "num_base_bdevs_discovered": 2, 00:12:50.764 "num_base_bdevs_operational": 4, 00:12:50.764 "base_bdevs_list": [ 00:12:50.764 { 00:12:50.764 "name": "BaseBdev1", 00:12:50.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.764 "is_configured": false, 00:12:50.764 "data_offset": 0, 00:12:50.764 "data_size": 0 
00:12:50.764 }, 00:12:50.764 { 00:12:50.764 "name": null, 00:12:50.764 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:50.764 "is_configured": false, 00:12:50.764 "data_offset": 0, 00:12:50.764 "data_size": 65536 00:12:50.764 }, 00:12:50.764 { 00:12:50.764 "name": "BaseBdev3", 00:12:50.764 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:50.764 "is_configured": true, 00:12:50.764 "data_offset": 0, 00:12:50.764 "data_size": 65536 00:12:50.764 }, 00:12:50.764 { 00:12:50.764 "name": "BaseBdev4", 00:12:50.764 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:50.764 "is_configured": true, 00:12:50.764 "data_offset": 0, 00:12:50.764 "data_size": 65536 00:12:50.764 } 00:12:50.764 ] 00:12:50.764 }' 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.764 06:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.333 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.333 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.592 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:51.592 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.851 [2024-08-13 06:07:53.479625] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.851 BaseBdev1 00:12:51.851 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:51.851 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:51.851 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:51.851 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:51.851 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:51.851 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:51.851 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.108 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.108 [ 00:12:52.108 { 00:12:52.108 "name": "BaseBdev1", 00:12:52.108 "aliases": [ 00:12:52.109 "5373185f-11ff-48cb-a488-c4bbd642b77a" 00:12:52.109 ], 00:12:52.109 "product_name": "Malloc disk", 00:12:52.109 "block_size": 512, 00:12:52.109 "num_blocks": 65536, 00:12:52.109 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:52.109 "assigned_rate_limits": { 00:12:52.109 "rw_ios_per_sec": 0, 00:12:52.109 "rw_mbytes_per_sec": 0, 00:12:52.109 "r_mbytes_per_sec": 0, 00:12:52.109 "w_mbytes_per_sec": 0 00:12:52.109 }, 00:12:52.109 "claimed": true, 00:12:52.109 "claim_type": "exclusive_write", 00:12:52.109 "zoned": false, 00:12:52.109 "supported_io_types": { 00:12:52.109 "read": true, 00:12:52.109 "write": true, 00:12:52.109 "unmap": true, 00:12:52.109 "flush": true, 00:12:52.109 "reset": true, 
00:12:52.109 "nvme_admin": false, 00:12:52.109 "nvme_io": false, 00:12:52.109 "nvme_io_md": false, 00:12:52.109 "write_zeroes": true, 00:12:52.109 "zcopy": true, 00:12:52.109 "get_zone_info": false, 00:12:52.109 "zone_management": false, 00:12:52.109 "zone_append": false, 00:12:52.109 "compare": false, 00:12:52.109 "compare_and_write": false, 00:12:52.109 "abort": true, 00:12:52.109 "seek_hole": false, 00:12:52.109 "seek_data": false, 00:12:52.109 "copy": true, 00:12:52.109 "nvme_iov_md": false 00:12:52.109 }, 00:12:52.109 "memory_domains": [ 00:12:52.109 { 00:12:52.109 "dma_device_id": "system", 00:12:52.109 "dma_device_type": 1 00:12:52.109 }, 00:12:52.109 { 00:12:52.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.109 "dma_device_type": 2 00:12:52.109 } 00:12:52.109 ], 00:12:52.109 "driver_specific": {} 00:12:52.109 } 00:12:52.109 ] 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.109 06:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.367 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.367 "name": "Existed_Raid", 00:12:52.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.367 "strip_size_kb": 64, 00:12:52.367 "state": "configuring", 00:12:52.367 "raid_level": "raid0", 00:12:52.367 "superblock": false, 00:12:52.367 "num_base_bdevs": 4, 00:12:52.367 "num_base_bdevs_discovered": 3, 00:12:52.367 "num_base_bdevs_operational": 4, 00:12:52.367 "base_bdevs_list": [ 00:12:52.367 { 00:12:52.367 "name": "BaseBdev1", 00:12:52.367 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:52.367 "is_configured": true, 00:12:52.367 "data_offset": 0, 00:12:52.367 "data_size": 65536 00:12:52.367 }, 00:12:52.367 { 00:12:52.367 "name": null, 00:12:52.367 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:52.367 "is_configured": false, 00:12:52.367 "data_offset": 0, 00:12:52.367 "data_size": 65536 00:12:52.367 }, 00:12:52.367 { 00:12:52.367 "name": "BaseBdev3", 00:12:52.367 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:52.367 "is_configured": true, 00:12:52.367 "data_offset": 0, 
00:12:52.367 "data_size": 65536 00:12:52.367 }, 00:12:52.367 { 00:12:52.367 "name": "BaseBdev4", 00:12:52.367 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:52.367 "is_configured": true, 00:12:52.367 "data_offset": 0, 00:12:52.367 "data_size": 65536 00:12:52.367 } 00:12:52.367 ] 00:12:52.367 }' 00:12:52.367 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.367 06:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.936 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.936 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.194 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:53.194 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:53.194 [2024-08-13 06:07:54.973165] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.453 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:53.453 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:53.453 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.454 06:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.454 06:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:53.454 "name": "Existed_Raid", 00:12:53.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.454 "strip_size_kb": 64, 00:12:53.454 "state": "configuring", 00:12:53.454 "raid_level": "raid0", 00:12:53.454 "superblock": false, 00:12:53.454 "num_base_bdevs": 4, 00:12:53.454 "num_base_bdevs_discovered": 2, 00:12:53.454 "num_base_bdevs_operational": 4, 00:12:53.454 "base_bdevs_list": [ 00:12:53.454 { 00:12:53.454 "name": "BaseBdev1", 00:12:53.454 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:53.454 "is_configured": true, 00:12:53.454 "data_offset": 0, 00:12:53.454 "data_size": 65536 00:12:53.454 }, 00:12:53.454 { 00:12:53.454 "name": null, 00:12:53.454 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:53.454 
"is_configured": false, 00:12:53.454 "data_offset": 0, 00:12:53.454 "data_size": 65536 00:12:53.454 }, 00:12:53.454 { 00:12:53.454 "name": null, 00:12:53.454 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:53.454 "is_configured": false, 00:12:53.454 "data_offset": 0, 00:12:53.454 "data_size": 65536 00:12:53.454 }, 00:12:53.454 { 00:12:53.454 "name": "BaseBdev4", 00:12:53.454 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:53.454 "is_configured": true, 00:12:53.454 "data_offset": 0, 00:12:53.454 "data_size": 65536 00:12:53.454 } 00:12:53.454 ] 00:12:53.454 }' 00:12:53.454 06:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:53.454 06:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.022 06:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.022 06:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.281 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:54.281 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:54.541 [2024-08-13 06:07:56.179122] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.541 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.800 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.800 "name": "Existed_Raid", 00:12:54.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.800 "strip_size_kb": 64, 00:12:54.800 "state": "configuring", 00:12:54.800 "raid_level": "raid0", 00:12:54.800 "superblock": false, 00:12:54.800 "num_base_bdevs": 4, 00:12:54.800 "num_base_bdevs_discovered": 3, 00:12:54.800 "num_base_bdevs_operational": 4, 00:12:54.800 "base_bdevs_list": [ 00:12:54.800 { 00:12:54.800 "name": 
"BaseBdev1", 00:12:54.801 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:54.801 "is_configured": true, 00:12:54.801 "data_offset": 0, 00:12:54.801 "data_size": 65536 00:12:54.801 }, 00:12:54.801 { 00:12:54.801 "name": null, 00:12:54.801 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:54.801 "is_configured": false, 00:12:54.801 "data_offset": 0, 00:12:54.801 "data_size": 65536 00:12:54.801 }, 00:12:54.801 { 00:12:54.801 "name": "BaseBdev3", 00:12:54.801 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:54.801 "is_configured": true, 00:12:54.801 "data_offset": 0, 00:12:54.801 "data_size": 65536 00:12:54.801 }, 00:12:54.801 { 00:12:54.801 "name": "BaseBdev4", 00:12:54.801 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:54.801 "is_configured": true, 00:12:54.801 "data_offset": 0, 00:12:54.801 "data_size": 65536 00:12:54.801 } 00:12:54.801 ] 00:12:54.801 }' 00:12:54.801 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.801 06:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.369 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.369 06:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:55.369 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:55.369 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:55.628 [2024-08-13 06:07:57.277213] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.628 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.887 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.887 "name": "Existed_Raid", 00:12:55.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.887 "strip_size_kb": 64, 00:12:55.888 "state": "configuring", 
00:12:55.888 "raid_level": "raid0", 00:12:55.888 "superblock": false, 00:12:55.888 "num_base_bdevs": 4, 00:12:55.888 "num_base_bdevs_discovered": 2, 00:12:55.888 "num_base_bdevs_operational": 4, 00:12:55.888 "base_bdevs_list": [ 00:12:55.888 { 00:12:55.888 "name": null, 00:12:55.888 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:55.888 "is_configured": false, 00:12:55.888 "data_offset": 0, 00:12:55.888 "data_size": 65536 00:12:55.888 }, 00:12:55.888 { 00:12:55.888 "name": null, 00:12:55.888 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:55.888 "is_configured": false, 00:12:55.888 "data_offset": 0, 00:12:55.888 "data_size": 65536 00:12:55.888 }, 00:12:55.888 { 00:12:55.888 "name": "BaseBdev3", 00:12:55.888 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:55.888 "is_configured": true, 00:12:55.888 "data_offset": 0, 00:12:55.888 "data_size": 65536 00:12:55.888 }, 00:12:55.888 { 00:12:55.888 "name": "BaseBdev4", 00:12:55.888 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:55.888 "is_configured": true, 00:12:55.888 "data_offset": 0, 00:12:55.888 "data_size": 65536 00:12:55.888 } 00:12:55.888 ] 00:12:55.888 }' 00:12:55.888 06:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:55.888 06:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.456 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.456 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.456 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:56.456 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:56.715 [2024-08-13 06:07:58.365970] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.715 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:56.974 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.974 "name": "Existed_Raid", 00:12:56.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.974 "strip_size_kb": 64, 00:12:56.974 "state": "configuring", 00:12:56.974 "raid_level": "raid0", 00:12:56.974 "superblock": false, 00:12:56.974 "num_base_bdevs": 4, 00:12:56.975 "num_base_bdevs_discovered": 3, 00:12:56.975 "num_base_bdevs_operational": 4, 00:12:56.975 "base_bdevs_list": [ 00:12:56.975 { 00:12:56.975 "name": null, 00:12:56.975 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:56.975 "is_configured": false, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": "BaseBdev2", 00:12:56.975 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:56.975 "is_configured": true, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": "BaseBdev3", 00:12:56.975 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:56.975 "is_configured": true, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": "BaseBdev4", 00:12:56.975 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:56.975 "is_configured": true, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 } 00:12:56.975 ] 00:12:56.975 }' 00:12:56.975 06:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.975 06:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.543 06:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:57.543 06:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.802 06:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:57.802 06:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.802 06:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:57.802 06:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5373185f-11ff-48cb-a488-c4bbd642b77a 00:12:58.061 [2024-08-13 06:07:59.750703] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:58.061 [2024-08-13 06:07:59.750748] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:58.061 [2024-08-13 06:07:59.750756] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:58.061 [2024-08-13 06:07:59.750985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:58.061 [2024-08-13 06:07:59.751115] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:58.061 [2024-08-13 06:07:59.751124] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:58.061 [2024-08-13 06:07:59.751302] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.061 
NewBaseBdev 00:12:58.061 06:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:58.061 06:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:12:58.061 06:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:58.061 06:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:58.061 06:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:58.061 06:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:58.061 06:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:58.320 06:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:58.320 [ 00:12:58.320 { 00:12:58.320 "name": "NewBaseBdev", 00:12:58.320 "aliases": [ 00:12:58.320 "5373185f-11ff-48cb-a488-c4bbd642b77a" 00:12:58.320 ], 00:12:58.320 "product_name": "Malloc disk", 00:12:58.320 "block_size": 512, 00:12:58.320 "num_blocks": 65536, 00:12:58.320 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:58.320 "assigned_rate_limits": { 00:12:58.320 "rw_ios_per_sec": 0, 00:12:58.320 "rw_mbytes_per_sec": 0, 00:12:58.320 "r_mbytes_per_sec": 0, 00:12:58.320 "w_mbytes_per_sec": 0 00:12:58.320 }, 00:12:58.320 "claimed": true, 00:12:58.320 "claim_type": "exclusive_write", 00:12:58.320 "zoned": false, 00:12:58.320 "supported_io_types": { 00:12:58.320 "read": true, 00:12:58.320 "write": true, 00:12:58.320 "unmap": true, 00:12:58.320 "flush": true, 00:12:58.320 "reset": true, 00:12:58.320 "nvme_admin": false, 00:12:58.320 "nvme_io": false, 00:12:58.320 "nvme_io_md": false, 00:12:58.320 "write_zeroes": true, 00:12:58.320 "zcopy": true, 00:12:58.320 "get_zone_info": false, 00:12:58.320 "zone_management": false, 00:12:58.320 "zone_append": false, 00:12:58.320 "compare": false, 00:12:58.320 "compare_and_write": false, 00:12:58.320 "abort": true, 00:12:58.320 "seek_hole": false, 00:12:58.320 "seek_data": false, 00:12:58.320 "copy": true, 00:12:58.320 "nvme_iov_md": false 00:12:58.320 }, 00:12:58.320 "memory_domains": [ 00:12:58.320 { 00:12:58.320 "dma_device_id": "system", 00:12:58.320 "dma_device_type": 1 00:12:58.320 }, 00:12:58.320 { 00:12:58.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.320 "dma_device_type": 2 00:12:58.320 } 00:12:58.320 ], 00:12:58.320 "driver_specific": {} 00:12:58.320 } 00:12:58.320 ] 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.320 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.579 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.579 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.579 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.579 "name": "Existed_Raid", 00:12:58.579 "uuid": "3f258b8e-6b1f-4a93-b28d-4774e71b93a8", 00:12:58.579 "strip_size_kb": 64, 00:12:58.579 "state": "online", 00:12:58.579 "raid_level": "raid0", 00:12:58.579 "superblock": false, 00:12:58.579 "num_base_bdevs": 4, 00:12:58.579 "num_base_bdevs_discovered": 4, 00:12:58.579 "num_base_bdevs_operational": 4, 00:12:58.579 "base_bdevs_list": [ 00:12:58.579 { 00:12:58.579 "name": "NewBaseBdev", 00:12:58.579 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:58.579 "is_configured": true, 00:12:58.579 "data_offset": 0, 00:12:58.579 "data_size": 65536 00:12:58.579 }, 00:12:58.579 { 00:12:58.579 "name": "BaseBdev2", 00:12:58.579 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:58.579 "is_configured": true, 00:12:58.579 "data_offset": 0, 00:12:58.579 "data_size": 65536 00:12:58.579 }, 00:12:58.579 { 00:12:58.579 "name": "BaseBdev3", 00:12:58.579 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:58.579 "is_configured": true, 00:12:58.579 "data_offset": 0, 00:12:58.579 "data_size": 65536 00:12:58.579 }, 00:12:58.579 { 00:12:58.579 "name": "BaseBdev4", 00:12:58.579 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:58.579 "is_configured": true, 00:12:58.579 "data_offset": 0, 00:12:58.579 "data_size": 65536 00:12:58.579 } 00:12:58.579 ] 00:12:58.579 }' 00:12:58.579 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.579 06:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:59.147 06:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:59.406 [2024-08-13 06:08:01.072964] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.406 06:08:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:59.406 "name": "Existed_Raid", 00:12:59.406 "aliases": [ 00:12:59.406 "3f258b8e-6b1f-4a93-b28d-4774e71b93a8" 00:12:59.406 ], 00:12:59.406 "product_name": "Raid Volume", 00:12:59.406 "block_size": 512, 00:12:59.406 "num_blocks": 262144, 00:12:59.406 "uuid": "3f258b8e-6b1f-4a93-b28d-4774e71b93a8", 00:12:59.406 "assigned_rate_limits": { 00:12:59.406 "rw_ios_per_sec": 0, 00:12:59.406 "rw_mbytes_per_sec": 0, 00:12:59.406 "r_mbytes_per_sec": 0, 00:12:59.406 "w_mbytes_per_sec": 0 00:12:59.406 }, 00:12:59.406 "claimed": false, 00:12:59.406 "zoned": false, 00:12:59.406 "supported_io_types": { 00:12:59.406 "read": true, 00:12:59.406 "write": true, 00:12:59.406 "unmap": true, 00:12:59.406 "flush": true, 00:12:59.406 "reset": true, 00:12:59.406 "nvme_admin": false, 00:12:59.406 "nvme_io": false, 00:12:59.406 "nvme_io_md": false, 00:12:59.406 "write_zeroes": true, 00:12:59.406 "zcopy": false, 00:12:59.406 "get_zone_info": false, 00:12:59.406 "zone_management": false, 00:12:59.406 "zone_append": false, 00:12:59.406 "compare": false, 00:12:59.406 "compare_and_write": false, 00:12:59.406 "abort": false, 00:12:59.406 "seek_hole": false, 00:12:59.406 "seek_data": false, 00:12:59.406 "copy": false, 00:12:59.406 "nvme_iov_md": false 00:12:59.406 }, 00:12:59.406 "memory_domains": [ 00:12:59.406 { 00:12:59.406 "dma_device_id": "system", 00:12:59.406 "dma_device_type": 1 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.406 "dma_device_type": 2 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "dma_device_id": "system", 00:12:59.406 "dma_device_type": 1 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.406 "dma_device_type": 2 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "dma_device_id": "system", 00:12:59.406 "dma_device_type": 1 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.406 "dma_device_type": 2 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "dma_device_id": "system", 00:12:59.406 "dma_device_type": 1 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.406 "dma_device_type": 2 00:12:59.406 } 00:12:59.406 ], 00:12:59.406 "driver_specific": { 00:12:59.406 "raid": { 00:12:59.406 "uuid": "3f258b8e-6b1f-4a93-b28d-4774e71b93a8", 00:12:59.406 "strip_size_kb": 64, 00:12:59.406 "state": "online", 00:12:59.406 "raid_level": "raid0", 00:12:59.406 "superblock": false, 00:12:59.406 "num_base_bdevs": 4, 00:12:59.406 "num_base_bdevs_discovered": 4, 00:12:59.406 "num_base_bdevs_operational": 4, 00:12:59.406 "base_bdevs_list": [ 00:12:59.406 { 00:12:59.406 "name": "NewBaseBdev", 00:12:59.406 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:59.406 "is_configured": true, 00:12:59.406 "data_offset": 0, 00:12:59.406 "data_size": 65536 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "name": "BaseBdev2", 00:12:59.406 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:12:59.406 "is_configured": true, 00:12:59.406 "data_offset": 0, 00:12:59.406 "data_size": 65536 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "name": "BaseBdev3", 00:12:59.406 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:12:59.406 "is_configured": true, 00:12:59.406 "data_offset": 0, 00:12:59.406 "data_size": 65536 00:12:59.406 }, 00:12:59.406 { 00:12:59.406 "name": "BaseBdev4", 00:12:59.406 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:12:59.406 "is_configured": true, 00:12:59.406 "data_offset": 0, 00:12:59.406 "data_size": 65536 
00:12:59.406 } 00:12:59.406 ] 00:12:59.406 } 00:12:59.406 } 00:12:59.406 }' 00:12:59.406 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.406 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:59.406 BaseBdev2 00:12:59.406 BaseBdev3 00:12:59.406 BaseBdev4' 00:12:59.406 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:59.406 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:59.406 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:59.664 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:59.664 "name": "NewBaseBdev", 00:12:59.664 "aliases": [ 00:12:59.664 "5373185f-11ff-48cb-a488-c4bbd642b77a" 00:12:59.664 ], 00:12:59.664 "product_name": "Malloc disk", 00:12:59.664 "block_size": 512, 00:12:59.664 "num_blocks": 65536, 00:12:59.664 "uuid": "5373185f-11ff-48cb-a488-c4bbd642b77a", 00:12:59.664 "assigned_rate_limits": { 00:12:59.664 "rw_ios_per_sec": 0, 00:12:59.664 "rw_mbytes_per_sec": 0, 00:12:59.664 "r_mbytes_per_sec": 0, 00:12:59.664 "w_mbytes_per_sec": 0 00:12:59.664 }, 00:12:59.664 "claimed": true, 00:12:59.664 "claim_type": "exclusive_write", 00:12:59.664 "zoned": false, 00:12:59.664 "supported_io_types": { 00:12:59.664 "read": true, 00:12:59.664 "write": true, 00:12:59.664 "unmap": true, 00:12:59.664 "flush": true, 00:12:59.664 "reset": true, 00:12:59.664 "nvme_admin": false, 00:12:59.664 "nvme_io": false, 00:12:59.664 "nvme_io_md": false, 00:12:59.664 "write_zeroes": true, 00:12:59.664 "zcopy": true, 00:12:59.664 "get_zone_info": false, 00:12:59.664 "zone_management": false, 00:12:59.665 "zone_append": false, 00:12:59.665 "compare": false, 00:12:59.665 "compare_and_write": false, 00:12:59.665 "abort": true, 00:12:59.665 "seek_hole": false, 00:12:59.665 "seek_data": false, 00:12:59.665 "copy": true, 00:12:59.665 "nvme_iov_md": false 00:12:59.665 }, 00:12:59.665 "memory_domains": [ 00:12:59.665 { 00:12:59.665 "dma_device_id": "system", 00:12:59.665 "dma_device_type": 1 00:12:59.665 }, 00:12:59.665 { 00:12:59.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.665 "dma_device_type": 2 00:12:59.665 } 00:12:59.665 ], 00:12:59.665 "driver_specific": {} 00:12:59.665 }' 00:12:59.665 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.665 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.665 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:59.665 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.665 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:59.922 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:00.181 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:00.181 "name": "BaseBdev2", 00:13:00.181 "aliases": [ 00:13:00.181 "6eac8856-1900-46e8-97e2-2ad19096a9ea" 00:13:00.181 ], 00:13:00.181 "product_name": "Malloc disk", 00:13:00.181 "block_size": 512, 00:13:00.181 "num_blocks": 65536, 00:13:00.181 "uuid": "6eac8856-1900-46e8-97e2-2ad19096a9ea", 00:13:00.181 "assigned_rate_limits": { 00:13:00.181 "rw_ios_per_sec": 0, 00:13:00.181 "rw_mbytes_per_sec": 0, 00:13:00.181 "r_mbytes_per_sec": 0, 00:13:00.181 "w_mbytes_per_sec": 0 00:13:00.181 }, 00:13:00.181 "claimed": true, 00:13:00.181 "claim_type": "exclusive_write", 00:13:00.181 "zoned": false, 00:13:00.181 "supported_io_types": { 00:13:00.181 "read": true, 00:13:00.181 "write": true, 00:13:00.181 "unmap": true, 00:13:00.181 "flush": true, 00:13:00.181 "reset": true, 00:13:00.181 "nvme_admin": false, 00:13:00.181 "nvme_io": false, 00:13:00.181 "nvme_io_md": false, 00:13:00.181 "write_zeroes": true, 00:13:00.181 "zcopy": true, 00:13:00.181 "get_zone_info": false, 00:13:00.181 "zone_management": false, 00:13:00.181 "zone_append": false, 00:13:00.181 "compare": false, 00:13:00.181 "compare_and_write": false, 00:13:00.181 "abort": true, 00:13:00.181 "seek_hole": false, 00:13:00.181 "seek_data": false, 00:13:00.181 "copy": true, 00:13:00.181 "nvme_iov_md": false 00:13:00.181 }, 00:13:00.181 "memory_domains": [ 00:13:00.181 { 00:13:00.181 "dma_device_id": "system", 00:13:00.181 "dma_device_type": 1 00:13:00.181 }, 00:13:00.181 { 00:13:00.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.181 "dma_device_type": 2 00:13:00.181 } 00:13:00.181 ], 00:13:00.181 "driver_specific": {} 00:13:00.181 }' 00:13:00.181 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:00.182 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:00.182 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:00.182 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:00.440 06:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:00.440 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:00.698 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:00.698 "name": "BaseBdev3", 00:13:00.698 "aliases": [ 00:13:00.698 "b0247cce-fd4c-48d3-8a33-de064ce302fa" 00:13:00.698 ], 00:13:00.698 "product_name": "Malloc disk", 00:13:00.698 "block_size": 512, 00:13:00.698 "num_blocks": 65536, 00:13:00.698 "uuid": "b0247cce-fd4c-48d3-8a33-de064ce302fa", 00:13:00.698 "assigned_rate_limits": { 00:13:00.698 "rw_ios_per_sec": 0, 00:13:00.698 "rw_mbytes_per_sec": 0, 00:13:00.698 "r_mbytes_per_sec": 0, 00:13:00.698 "w_mbytes_per_sec": 0 00:13:00.698 }, 00:13:00.698 "claimed": true, 00:13:00.698 "claim_type": "exclusive_write", 00:13:00.698 "zoned": false, 00:13:00.698 "supported_io_types": { 00:13:00.698 "read": true, 00:13:00.698 "write": true, 00:13:00.698 "unmap": true, 00:13:00.698 "flush": true, 00:13:00.698 "reset": true, 00:13:00.698 "nvme_admin": false, 00:13:00.698 "nvme_io": false, 00:13:00.698 "nvme_io_md": false, 00:13:00.698 "write_zeroes": true, 00:13:00.698 "zcopy": true, 00:13:00.698 "get_zone_info": false, 00:13:00.698 "zone_management": false, 00:13:00.698 "zone_append": false, 00:13:00.698 "compare": false, 00:13:00.698 "compare_and_write": false, 00:13:00.698 "abort": true, 00:13:00.698 "seek_hole": false, 00:13:00.698 "seek_data": false, 00:13:00.698 "copy": true, 00:13:00.699 "nvme_iov_md": false 00:13:00.699 }, 00:13:00.699 "memory_domains": [ 00:13:00.699 { 00:13:00.699 "dma_device_id": "system", 00:13:00.699 "dma_device_type": 1 00:13:00.699 }, 00:13:00.699 { 00:13:00.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.699 "dma_device_type": 2 00:13:00.699 } 00:13:00.699 ], 00:13:00.699 "driver_specific": {} 00:13:00.699 }' 00:13:00.699 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:00.699 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:00.699 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:00.699 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:00.957 06:08:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:00.957 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:01.216 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:01.216 "name": "BaseBdev4", 00:13:01.216 "aliases": [ 00:13:01.216 "f700d754-0a32-44a6-b68c-568ceacba77a" 00:13:01.216 ], 00:13:01.216 "product_name": "Malloc disk", 00:13:01.216 "block_size": 512, 00:13:01.216 "num_blocks": 65536, 00:13:01.216 "uuid": "f700d754-0a32-44a6-b68c-568ceacba77a", 00:13:01.216 "assigned_rate_limits": { 00:13:01.216 "rw_ios_per_sec": 0, 00:13:01.216 "rw_mbytes_per_sec": 0, 00:13:01.216 "r_mbytes_per_sec": 0, 00:13:01.216 "w_mbytes_per_sec": 0 00:13:01.216 }, 00:13:01.216 "claimed": true, 00:13:01.216 "claim_type": "exclusive_write", 00:13:01.216 "zoned": false, 00:13:01.216 "supported_io_types": { 00:13:01.216 "read": true, 00:13:01.216 "write": true, 00:13:01.216 "unmap": true, 00:13:01.216 "flush": true, 00:13:01.216 "reset": true, 00:13:01.216 "nvme_admin": false, 00:13:01.216 "nvme_io": false, 00:13:01.216 "nvme_io_md": false, 00:13:01.216 "write_zeroes": true, 00:13:01.216 "zcopy": true, 00:13:01.216 "get_zone_info": false, 00:13:01.216 "zone_management": false, 00:13:01.216 "zone_append": false, 00:13:01.216 "compare": false, 00:13:01.216 "compare_and_write": false, 00:13:01.216 "abort": true, 00:13:01.216 "seek_hole": false, 00:13:01.216 "seek_data": false, 00:13:01.216 "copy": true, 00:13:01.216 "nvme_iov_md": false 00:13:01.216 }, 00:13:01.216 "memory_domains": [ 00:13:01.216 { 00:13:01.216 "dma_device_id": "system", 00:13:01.216 "dma_device_type": 1 00:13:01.216 }, 00:13:01.216 { 00:13:01.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.216 "dma_device_type": 2 00:13:01.216 } 00:13:01.216 ], 00:13:01.216 "driver_specific": {} 00:13:01.216 }' 00:13:01.216 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:01.216 06:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:01.475 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:01.734 [2024-08-13 06:08:03.408660] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.734 [2024-08-13 
06:08:03.408687] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.734 [2024-08-13 06:08:03.408757] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.735 [2024-08-13 06:08:03.408820] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.735 [2024-08-13 06:08:03.408839] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 83164 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 83164 ']' 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 83164 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83164 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:01.735 killing process with pid 83164 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83164' 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 83164 00:13:01.735 [2024-08-13 06:08:03.480293] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.735 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 83164 00:13:01.735 [2024-08-13 06:08:03.521783] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.994 06:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:01.994 00:13:01.994 real 0m28.018s 00:13:01.994 user 0m51.689s 00:13:01.994 sys 0m4.712s 00:13:01.994 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.994 ************************************ 00:13:01.994 END TEST raid_state_function_test 00:13:01.994 ************************************ 00:13:01.994 06:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.254 06:08:03 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:02.255 06:08:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:02.255 06:08:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.255 06:08:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.255 ************************************ 00:13:02.255 START TEST raid_state_function_test_sb 00:13:02.255 ************************************ 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:02.255 Process raid pid: 84171 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=84171 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 84171' 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 84171 /var/tmp/spdk-raid.sock 00:13:02.255 
06:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 84171 ']' 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:02.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.255 06:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.255 [2024-08-13 06:08:03.933814] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:13:02.255 [2024-08-13 06:08:03.934077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.524 [2024-08-13 06:08:04.080776] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.524 [2024-08-13 06:08:04.128794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.524 [2024-08-13 06:08:04.172067] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.524 [2024-08-13 06:08:04.172097] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.132 06:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.132 06:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:13:03.132 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:03.132 [2024-08-13 06:08:04.916150] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.132 [2024-08-13 06:08:04.916205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.132 [2024-08-13 06:08:04.916219] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.132 [2024-08-13 06:08:04.916227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.132 [2024-08-13 06:08:04.916237] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:03.132 [2024-08-13 06:08:04.916244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:03.132 [2024-08-13 06:08:04.916253] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:03.132 [2024-08-13 06:08:04.916260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.409 06:08:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.409 06:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.409 06:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.409 "name": "Existed_Raid", 00:13:03.409 "uuid": "efec7cfa-97e9-43dc-8d21-f16f182f00ab", 00:13:03.409 "strip_size_kb": 64, 00:13:03.409 "state": "configuring", 00:13:03.409 "raid_level": "raid0", 00:13:03.409 "superblock": true, 00:13:03.409 "num_base_bdevs": 4, 00:13:03.409 "num_base_bdevs_discovered": 0, 00:13:03.409 "num_base_bdevs_operational": 4, 00:13:03.409 "base_bdevs_list": [ 00:13:03.409 { 00:13:03.409 "name": "BaseBdev1", 00:13:03.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.409 "is_configured": false, 00:13:03.409 "data_offset": 0, 00:13:03.409 "data_size": 0 00:13:03.409 }, 00:13:03.409 { 00:13:03.409 "name": "BaseBdev2", 00:13:03.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.409 "is_configured": false, 00:13:03.409 "data_offset": 0, 00:13:03.409 "data_size": 0 00:13:03.409 }, 00:13:03.409 { 00:13:03.409 "name": "BaseBdev3", 00:13:03.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.409 "is_configured": false, 00:13:03.409 "data_offset": 0, 00:13:03.409 "data_size": 0 00:13:03.409 }, 00:13:03.409 { 00:13:03.409 "name": "BaseBdev4", 00:13:03.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.409 "is_configured": false, 00:13:03.409 "data_offset": 0, 00:13:03.409 "data_size": 0 00:13:03.409 } 00:13:03.409 ] 00:13:03.409 }' 00:13:03.409 06:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.409 06:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.977 06:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:04.236 [2024-08-13 06:08:05.838355] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:04.236 [2024-08-13 06:08:05.838447] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:04.236 06:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' 
-n Existed_Raid 00:13:04.496 [2024-08-13 06:08:06.054095] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:04.496 [2024-08-13 06:08:06.054208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:04.496 [2024-08-13 06:08:06.054238] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:04.496 [2024-08-13 06:08:06.054267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:04.496 [2024-08-13 06:08:06.054285] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:04.496 [2024-08-13 06:08:06.054302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:04.496 [2024-08-13 06:08:06.054336] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:04.496 [2024-08-13 06:08:06.054354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:04.496 [2024-08-13 06:08:06.250494] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.496 BaseBdev1 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:04.496 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.754 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:05.013 [ 00:13:05.013 { 00:13:05.013 "name": "BaseBdev1", 00:13:05.013 "aliases": [ 00:13:05.013 "fb8dc91f-d348-44a9-b5cc-11659b188cf3" 00:13:05.013 ], 00:13:05.013 "product_name": "Malloc disk", 00:13:05.013 "block_size": 512, 00:13:05.013 "num_blocks": 65536, 00:13:05.013 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:05.013 "assigned_rate_limits": { 00:13:05.013 "rw_ios_per_sec": 0, 00:13:05.013 "rw_mbytes_per_sec": 0, 00:13:05.013 "r_mbytes_per_sec": 0, 00:13:05.013 "w_mbytes_per_sec": 0 00:13:05.013 }, 00:13:05.013 "claimed": true, 00:13:05.013 "claim_type": "exclusive_write", 00:13:05.013 "zoned": false, 00:13:05.013 "supported_io_types": { 00:13:05.013 "read": true, 00:13:05.013 "write": true, 00:13:05.013 "unmap": true, 00:13:05.013 "flush": true, 00:13:05.013 "reset": true, 00:13:05.013 "nvme_admin": false, 00:13:05.013 "nvme_io": false, 00:13:05.013 "nvme_io_md": false, 00:13:05.013 "write_zeroes": true, 00:13:05.013 "zcopy": true, 00:13:05.013 "get_zone_info": false, 00:13:05.013 "zone_management": false, 00:13:05.013 
"zone_append": false, 00:13:05.013 "compare": false, 00:13:05.013 "compare_and_write": false, 00:13:05.013 "abort": true, 00:13:05.013 "seek_hole": false, 00:13:05.013 "seek_data": false, 00:13:05.013 "copy": true, 00:13:05.013 "nvme_iov_md": false 00:13:05.013 }, 00:13:05.013 "memory_domains": [ 00:13:05.013 { 00:13:05.013 "dma_device_id": "system", 00:13:05.013 "dma_device_type": 1 00:13:05.013 }, 00:13:05.013 { 00:13:05.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.013 "dma_device_type": 2 00:13:05.013 } 00:13:05.013 ], 00:13:05.013 "driver_specific": {} 00:13:05.013 } 00:13:05.013 ] 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.013 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.273 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.273 "name": "Existed_Raid", 00:13:05.273 "uuid": "5a4cd127-7a72-4fba-b252-deb1a5edfb2c", 00:13:05.273 "strip_size_kb": 64, 00:13:05.273 "state": "configuring", 00:13:05.273 "raid_level": "raid0", 00:13:05.273 "superblock": true, 00:13:05.273 "num_base_bdevs": 4, 00:13:05.273 "num_base_bdevs_discovered": 1, 00:13:05.273 "num_base_bdevs_operational": 4, 00:13:05.273 "base_bdevs_list": [ 00:13:05.273 { 00:13:05.273 "name": "BaseBdev1", 00:13:05.273 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:05.273 "is_configured": true, 00:13:05.273 "data_offset": 2048, 00:13:05.273 "data_size": 63488 00:13:05.273 }, 00:13:05.273 { 00:13:05.273 "name": "BaseBdev2", 00:13:05.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.273 "is_configured": false, 00:13:05.273 "data_offset": 0, 00:13:05.273 "data_size": 0 00:13:05.273 }, 00:13:05.273 { 00:13:05.273 "name": "BaseBdev3", 00:13:05.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.273 "is_configured": false, 00:13:05.273 "data_offset": 0, 00:13:05.273 "data_size": 0 00:13:05.273 }, 00:13:05.273 { 00:13:05.273 "name": "BaseBdev4", 00:13:05.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.273 "is_configured": false, 
00:13:05.273 "data_offset": 0, 00:13:05.273 "data_size": 0 00:13:05.273 } 00:13:05.273 ] 00:13:05.273 }' 00:13:05.273 06:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.273 06:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.841 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:05.841 [2024-08-13 06:08:07.576315] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:05.841 [2024-08-13 06:08:07.576369] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:05.841 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:06.100 [2024-08-13 06:08:07.764081] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.100 [2024-08-13 06:08:07.765861] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.100 [2024-08-13 06:08:07.765897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.100 [2024-08-13 06:08:07.765907] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:06.100 [2024-08-13 06:08:07.765918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.100 [2024-08-13 06:08:07.765926] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:06.100 [2024-08-13 06:08:07.765933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.100 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:06.101 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.101 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.101 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.101 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.101 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.101 
06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.360 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.360 "name": "Existed_Raid", 00:13:06.360 "uuid": "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7", 00:13:06.360 "strip_size_kb": 64, 00:13:06.360 "state": "configuring", 00:13:06.360 "raid_level": "raid0", 00:13:06.360 "superblock": true, 00:13:06.360 "num_base_bdevs": 4, 00:13:06.360 "num_base_bdevs_discovered": 1, 00:13:06.360 "num_base_bdevs_operational": 4, 00:13:06.360 "base_bdevs_list": [ 00:13:06.360 { 00:13:06.360 "name": "BaseBdev1", 00:13:06.360 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:06.360 "is_configured": true, 00:13:06.360 "data_offset": 2048, 00:13:06.360 "data_size": 63488 00:13:06.360 }, 00:13:06.360 { 00:13:06.360 "name": "BaseBdev2", 00:13:06.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.360 "is_configured": false, 00:13:06.360 "data_offset": 0, 00:13:06.360 "data_size": 0 00:13:06.360 }, 00:13:06.360 { 00:13:06.360 "name": "BaseBdev3", 00:13:06.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.360 "is_configured": false, 00:13:06.360 "data_offset": 0, 00:13:06.360 "data_size": 0 00:13:06.360 }, 00:13:06.360 { 00:13:06.360 "name": "BaseBdev4", 00:13:06.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.360 "is_configured": false, 00:13:06.360 "data_offset": 0, 00:13:06.360 "data_size": 0 00:13:06.360 } 00:13:06.360 ] 00:13:06.360 }' 00:13:06.360 06:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.360 06:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:06.928 [2024-08-13 06:08:08.673716] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.928 BaseBdev2 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:06.928 06:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:07.187 06:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:07.446 [ 00:13:07.446 { 00:13:07.446 "name": "BaseBdev2", 00:13:07.446 "aliases": [ 00:13:07.446 "7e873591-accb-4f42-bcbe-40faf05c8d02" 00:13:07.446 ], 00:13:07.446 "product_name": "Malloc disk", 00:13:07.446 "block_size": 512, 00:13:07.446 "num_blocks": 65536, 00:13:07.446 "uuid": "7e873591-accb-4f42-bcbe-40faf05c8d02", 00:13:07.446 
"assigned_rate_limits": { 00:13:07.446 "rw_ios_per_sec": 0, 00:13:07.446 "rw_mbytes_per_sec": 0, 00:13:07.446 "r_mbytes_per_sec": 0, 00:13:07.446 "w_mbytes_per_sec": 0 00:13:07.446 }, 00:13:07.446 "claimed": true, 00:13:07.446 "claim_type": "exclusive_write", 00:13:07.446 "zoned": false, 00:13:07.446 "supported_io_types": { 00:13:07.446 "read": true, 00:13:07.446 "write": true, 00:13:07.446 "unmap": true, 00:13:07.446 "flush": true, 00:13:07.446 "reset": true, 00:13:07.446 "nvme_admin": false, 00:13:07.446 "nvme_io": false, 00:13:07.446 "nvme_io_md": false, 00:13:07.446 "write_zeroes": true, 00:13:07.446 "zcopy": true, 00:13:07.446 "get_zone_info": false, 00:13:07.446 "zone_management": false, 00:13:07.446 "zone_append": false, 00:13:07.446 "compare": false, 00:13:07.446 "compare_and_write": false, 00:13:07.446 "abort": true, 00:13:07.446 "seek_hole": false, 00:13:07.446 "seek_data": false, 00:13:07.446 "copy": true, 00:13:07.446 "nvme_iov_md": false 00:13:07.446 }, 00:13:07.446 "memory_domains": [ 00:13:07.446 { 00:13:07.446 "dma_device_id": "system", 00:13:07.446 "dma_device_type": 1 00:13:07.446 }, 00:13:07.446 { 00:13:07.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.446 "dma_device_type": 2 00:13:07.446 } 00:13:07.446 ], 00:13:07.446 "driver_specific": {} 00:13:07.446 } 00:13:07.446 ] 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.446 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.705 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.705 "name": "Existed_Raid", 00:13:07.705 "uuid": "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7", 00:13:07.705 "strip_size_kb": 64, 00:13:07.705 "state": "configuring", 00:13:07.705 "raid_level": "raid0", 00:13:07.705 "superblock": true, 00:13:07.705 "num_base_bdevs": 4, 00:13:07.705 
"num_base_bdevs_discovered": 2, 00:13:07.705 "num_base_bdevs_operational": 4, 00:13:07.705 "base_bdevs_list": [ 00:13:07.705 { 00:13:07.705 "name": "BaseBdev1", 00:13:07.705 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:07.705 "is_configured": true, 00:13:07.705 "data_offset": 2048, 00:13:07.705 "data_size": 63488 00:13:07.705 }, 00:13:07.705 { 00:13:07.705 "name": "BaseBdev2", 00:13:07.705 "uuid": "7e873591-accb-4f42-bcbe-40faf05c8d02", 00:13:07.705 "is_configured": true, 00:13:07.705 "data_offset": 2048, 00:13:07.705 "data_size": 63488 00:13:07.705 }, 00:13:07.705 { 00:13:07.705 "name": "BaseBdev3", 00:13:07.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.705 "is_configured": false, 00:13:07.705 "data_offset": 0, 00:13:07.705 "data_size": 0 00:13:07.705 }, 00:13:07.705 { 00:13:07.705 "name": "BaseBdev4", 00:13:07.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.705 "is_configured": false, 00:13:07.705 "data_offset": 0, 00:13:07.705 "data_size": 0 00:13:07.705 } 00:13:07.705 ] 00:13:07.705 }' 00:13:07.705 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.705 06:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.274 06:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:08.274 [2024-08-13 06:08:10.022367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.274 BaseBdev3 00:13:08.274 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:08.274 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:08.274 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:08.274 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:08.274 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:08.274 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:08.274 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:08.533 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:08.792 [ 00:13:08.792 { 00:13:08.792 "name": "BaseBdev3", 00:13:08.792 "aliases": [ 00:13:08.792 "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca" 00:13:08.792 ], 00:13:08.792 "product_name": "Malloc disk", 00:13:08.792 "block_size": 512, 00:13:08.792 "num_blocks": 65536, 00:13:08.792 "uuid": "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca", 00:13:08.792 "assigned_rate_limits": { 00:13:08.792 "rw_ios_per_sec": 0, 00:13:08.792 "rw_mbytes_per_sec": 0, 00:13:08.792 "r_mbytes_per_sec": 0, 00:13:08.792 "w_mbytes_per_sec": 0 00:13:08.792 }, 00:13:08.792 "claimed": true, 00:13:08.792 "claim_type": "exclusive_write", 00:13:08.792 "zoned": false, 00:13:08.792 "supported_io_types": { 00:13:08.792 "read": true, 00:13:08.792 "write": true, 00:13:08.792 "unmap": true, 00:13:08.792 "flush": true, 00:13:08.792 "reset": true, 00:13:08.792 "nvme_admin": false, 00:13:08.792 "nvme_io": false, 
00:13:08.792 "nvme_io_md": false, 00:13:08.792 "write_zeroes": true, 00:13:08.792 "zcopy": true, 00:13:08.792 "get_zone_info": false, 00:13:08.792 "zone_management": false, 00:13:08.792 "zone_append": false, 00:13:08.792 "compare": false, 00:13:08.792 "compare_and_write": false, 00:13:08.792 "abort": true, 00:13:08.792 "seek_hole": false, 00:13:08.792 "seek_data": false, 00:13:08.792 "copy": true, 00:13:08.792 "nvme_iov_md": false 00:13:08.792 }, 00:13:08.792 "memory_domains": [ 00:13:08.792 { 00:13:08.792 "dma_device_id": "system", 00:13:08.792 "dma_device_type": 1 00:13:08.792 }, 00:13:08.792 { 00:13:08.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.792 "dma_device_type": 2 00:13:08.792 } 00:13:08.792 ], 00:13:08.792 "driver_specific": {} 00:13:08.792 } 00:13:08.792 ] 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.792 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.051 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.051 "name": "Existed_Raid", 00:13:09.051 "uuid": "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7", 00:13:09.051 "strip_size_kb": 64, 00:13:09.051 "state": "configuring", 00:13:09.051 "raid_level": "raid0", 00:13:09.051 "superblock": true, 00:13:09.051 "num_base_bdevs": 4, 00:13:09.051 "num_base_bdevs_discovered": 3, 00:13:09.051 "num_base_bdevs_operational": 4, 00:13:09.051 "base_bdevs_list": [ 00:13:09.051 { 00:13:09.051 "name": "BaseBdev1", 00:13:09.051 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:09.051 "is_configured": true, 00:13:09.051 "data_offset": 2048, 00:13:09.051 "data_size": 63488 00:13:09.051 }, 00:13:09.051 { 00:13:09.051 "name": "BaseBdev2", 00:13:09.051 "uuid": "7e873591-accb-4f42-bcbe-40faf05c8d02", 00:13:09.051 "is_configured": true, 00:13:09.051 "data_offset": 2048, 00:13:09.051 
"data_size": 63488 00:13:09.051 }, 00:13:09.051 { 00:13:09.051 "name": "BaseBdev3", 00:13:09.051 "uuid": "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca", 00:13:09.051 "is_configured": true, 00:13:09.051 "data_offset": 2048, 00:13:09.051 "data_size": 63488 00:13:09.051 }, 00:13:09.051 { 00:13:09.051 "name": "BaseBdev4", 00:13:09.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.051 "is_configured": false, 00:13:09.051 "data_offset": 0, 00:13:09.051 "data_size": 0 00:13:09.051 } 00:13:09.051 ] 00:13:09.051 }' 00:13:09.051 06:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.051 06:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:09.620 [2024-08-13 06:08:11.363171] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.620 [2024-08-13 06:08:11.363429] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:09.620 [2024-08-13 06:08:11.363470] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:09.620 [2024-08-13 06:08:11.363750] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:09.620 [2024-08-13 06:08:11.363915] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:09.620 [2024-08-13 06:08:11.363962] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:09.620 [2024-08-13 06:08:11.364107] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.620 BaseBdev4 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:09.620 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:09.880 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:10.139 [ 00:13:10.139 { 00:13:10.139 "name": "BaseBdev4", 00:13:10.139 "aliases": [ 00:13:10.139 "5c1c4feb-1826-4904-a80e-c36915d11ae7" 00:13:10.139 ], 00:13:10.139 "product_name": "Malloc disk", 00:13:10.139 "block_size": 512, 00:13:10.139 "num_blocks": 65536, 00:13:10.139 "uuid": "5c1c4feb-1826-4904-a80e-c36915d11ae7", 00:13:10.139 "assigned_rate_limits": { 00:13:10.139 "rw_ios_per_sec": 0, 00:13:10.139 "rw_mbytes_per_sec": 0, 00:13:10.139 "r_mbytes_per_sec": 0, 00:13:10.139 "w_mbytes_per_sec": 0 00:13:10.139 }, 00:13:10.139 "claimed": true, 00:13:10.139 "claim_type": "exclusive_write", 00:13:10.139 
"zoned": false, 00:13:10.139 "supported_io_types": { 00:13:10.139 "read": true, 00:13:10.139 "write": true, 00:13:10.139 "unmap": true, 00:13:10.139 "flush": true, 00:13:10.139 "reset": true, 00:13:10.139 "nvme_admin": false, 00:13:10.139 "nvme_io": false, 00:13:10.139 "nvme_io_md": false, 00:13:10.139 "write_zeroes": true, 00:13:10.139 "zcopy": true, 00:13:10.139 "get_zone_info": false, 00:13:10.139 "zone_management": false, 00:13:10.139 "zone_append": false, 00:13:10.139 "compare": false, 00:13:10.139 "compare_and_write": false, 00:13:10.139 "abort": true, 00:13:10.139 "seek_hole": false, 00:13:10.139 "seek_data": false, 00:13:10.139 "copy": true, 00:13:10.139 "nvme_iov_md": false 00:13:10.139 }, 00:13:10.139 "memory_domains": [ 00:13:10.139 { 00:13:10.139 "dma_device_id": "system", 00:13:10.139 "dma_device_type": 1 00:13:10.139 }, 00:13:10.139 { 00:13:10.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.139 "dma_device_type": 2 00:13:10.139 } 00:13:10.139 ], 00:13:10.139 "driver_specific": {} 00:13:10.139 } 00:13:10.139 ] 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.139 06:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.399 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.399 "name": "Existed_Raid", 00:13:10.399 "uuid": "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7", 00:13:10.399 "strip_size_kb": 64, 00:13:10.399 "state": "online", 00:13:10.399 "raid_level": "raid0", 00:13:10.399 "superblock": true, 00:13:10.399 "num_base_bdevs": 4, 00:13:10.399 "num_base_bdevs_discovered": 4, 00:13:10.399 "num_base_bdevs_operational": 4, 00:13:10.399 "base_bdevs_list": [ 00:13:10.399 { 00:13:10.399 "name": "BaseBdev1", 00:13:10.399 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:10.399 "is_configured": true, 00:13:10.399 "data_offset": 2048, 
00:13:10.399 "data_size": 63488 00:13:10.399 }, 00:13:10.399 { 00:13:10.399 "name": "BaseBdev2", 00:13:10.399 "uuid": "7e873591-accb-4f42-bcbe-40faf05c8d02", 00:13:10.399 "is_configured": true, 00:13:10.399 "data_offset": 2048, 00:13:10.399 "data_size": 63488 00:13:10.399 }, 00:13:10.399 { 00:13:10.399 "name": "BaseBdev3", 00:13:10.399 "uuid": "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca", 00:13:10.399 "is_configured": true, 00:13:10.399 "data_offset": 2048, 00:13:10.399 "data_size": 63488 00:13:10.399 }, 00:13:10.399 { 00:13:10.399 "name": "BaseBdev4", 00:13:10.399 "uuid": "5c1c4feb-1826-4904-a80e-c36915d11ae7", 00:13:10.399 "is_configured": true, 00:13:10.399 "data_offset": 2048, 00:13:10.399 "data_size": 63488 00:13:10.399 } 00:13:10.399 ] 00:13:10.399 }' 00:13:10.399 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.399 06:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.967 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:10.967 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:10.967 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:10.967 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:10.967 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:10.967 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:10.967 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:10.968 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:10.968 [2024-08-13 06:08:12.729343] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.227 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:11.227 "name": "Existed_Raid", 00:13:11.227 "aliases": [ 00:13:11.227 "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7" 00:13:11.227 ], 00:13:11.227 "product_name": "Raid Volume", 00:13:11.227 "block_size": 512, 00:13:11.227 "num_blocks": 253952, 00:13:11.227 "uuid": "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7", 00:13:11.227 "assigned_rate_limits": { 00:13:11.227 "rw_ios_per_sec": 0, 00:13:11.227 "rw_mbytes_per_sec": 0, 00:13:11.227 "r_mbytes_per_sec": 0, 00:13:11.227 "w_mbytes_per_sec": 0 00:13:11.227 }, 00:13:11.227 "claimed": false, 00:13:11.227 "zoned": false, 00:13:11.227 "supported_io_types": { 00:13:11.227 "read": true, 00:13:11.227 "write": true, 00:13:11.227 "unmap": true, 00:13:11.227 "flush": true, 00:13:11.227 "reset": true, 00:13:11.227 "nvme_admin": false, 00:13:11.227 "nvme_io": false, 00:13:11.227 "nvme_io_md": false, 00:13:11.227 "write_zeroes": true, 00:13:11.227 "zcopy": false, 00:13:11.227 "get_zone_info": false, 00:13:11.227 "zone_management": false, 00:13:11.227 "zone_append": false, 00:13:11.227 "compare": false, 00:13:11.227 "compare_and_write": false, 00:13:11.227 "abort": false, 00:13:11.227 "seek_hole": false, 00:13:11.227 "seek_data": false, 00:13:11.227 "copy": false, 00:13:11.227 "nvme_iov_md": false 00:13:11.227 }, 00:13:11.227 "memory_domains": [ 00:13:11.227 { 00:13:11.227 "dma_device_id": "system", 00:13:11.227 
"dma_device_type": 1 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.227 "dma_device_type": 2 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "dma_device_id": "system", 00:13:11.227 "dma_device_type": 1 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.227 "dma_device_type": 2 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "dma_device_id": "system", 00:13:11.227 "dma_device_type": 1 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.227 "dma_device_type": 2 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "dma_device_id": "system", 00:13:11.227 "dma_device_type": 1 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.227 "dma_device_type": 2 00:13:11.227 } 00:13:11.227 ], 00:13:11.227 "driver_specific": { 00:13:11.227 "raid": { 00:13:11.227 "uuid": "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7", 00:13:11.227 "strip_size_kb": 64, 00:13:11.227 "state": "online", 00:13:11.227 "raid_level": "raid0", 00:13:11.227 "superblock": true, 00:13:11.227 "num_base_bdevs": 4, 00:13:11.227 "num_base_bdevs_discovered": 4, 00:13:11.227 "num_base_bdevs_operational": 4, 00:13:11.227 "base_bdevs_list": [ 00:13:11.227 { 00:13:11.227 "name": "BaseBdev1", 00:13:11.227 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:11.227 "is_configured": true, 00:13:11.227 "data_offset": 2048, 00:13:11.227 "data_size": 63488 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "name": "BaseBdev2", 00:13:11.227 "uuid": "7e873591-accb-4f42-bcbe-40faf05c8d02", 00:13:11.227 "is_configured": true, 00:13:11.227 "data_offset": 2048, 00:13:11.227 "data_size": 63488 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "name": "BaseBdev3", 00:13:11.227 "uuid": "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca", 00:13:11.227 "is_configured": true, 00:13:11.227 "data_offset": 2048, 00:13:11.227 "data_size": 63488 00:13:11.227 }, 00:13:11.227 { 00:13:11.227 "name": "BaseBdev4", 00:13:11.227 "uuid": "5c1c4feb-1826-4904-a80e-c36915d11ae7", 00:13:11.227 "is_configured": true, 00:13:11.227 "data_offset": 2048, 00:13:11.227 "data_size": 63488 00:13:11.228 } 00:13:11.228 ] 00:13:11.228 } 00:13:11.228 } 00:13:11.228 }' 00:13:11.228 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.228 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:11.228 BaseBdev2 00:13:11.228 BaseBdev3 00:13:11.228 BaseBdev4' 00:13:11.228 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:11.228 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:11.228 06:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:11.487 "name": "BaseBdev1", 00:13:11.487 "aliases": [ 00:13:11.487 "fb8dc91f-d348-44a9-b5cc-11659b188cf3" 00:13:11.487 ], 00:13:11.487 "product_name": "Malloc disk", 00:13:11.487 "block_size": 512, 00:13:11.487 "num_blocks": 65536, 00:13:11.487 "uuid": "fb8dc91f-d348-44a9-b5cc-11659b188cf3", 00:13:11.487 "assigned_rate_limits": { 00:13:11.487 "rw_ios_per_sec": 0, 00:13:11.487 "rw_mbytes_per_sec": 0, 00:13:11.487 "r_mbytes_per_sec": 0, 
00:13:11.487 "w_mbytes_per_sec": 0 00:13:11.487 }, 00:13:11.487 "claimed": true, 00:13:11.487 "claim_type": "exclusive_write", 00:13:11.487 "zoned": false, 00:13:11.487 "supported_io_types": { 00:13:11.487 "read": true, 00:13:11.487 "write": true, 00:13:11.487 "unmap": true, 00:13:11.487 "flush": true, 00:13:11.487 "reset": true, 00:13:11.487 "nvme_admin": false, 00:13:11.487 "nvme_io": false, 00:13:11.487 "nvme_io_md": false, 00:13:11.487 "write_zeroes": true, 00:13:11.487 "zcopy": true, 00:13:11.487 "get_zone_info": false, 00:13:11.487 "zone_management": false, 00:13:11.487 "zone_append": false, 00:13:11.487 "compare": false, 00:13:11.487 "compare_and_write": false, 00:13:11.487 "abort": true, 00:13:11.487 "seek_hole": false, 00:13:11.487 "seek_data": false, 00:13:11.487 "copy": true, 00:13:11.487 "nvme_iov_md": false 00:13:11.487 }, 00:13:11.487 "memory_domains": [ 00:13:11.487 { 00:13:11.487 "dma_device_id": "system", 00:13:11.487 "dma_device_type": 1 00:13:11.487 }, 00:13:11.487 { 00:13:11.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.487 "dma_device_type": 2 00:13:11.487 } 00:13:11.487 ], 00:13:11.487 "driver_specific": {} 00:13:11.487 }' 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.487 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.747 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.747 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.747 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.747 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.747 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:11.747 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:11.747 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:12.006 "name": "BaseBdev2", 00:13:12.006 "aliases": [ 00:13:12.006 "7e873591-accb-4f42-bcbe-40faf05c8d02" 00:13:12.006 ], 00:13:12.006 "product_name": "Malloc disk", 00:13:12.006 "block_size": 512, 00:13:12.006 "num_blocks": 65536, 00:13:12.006 "uuid": "7e873591-accb-4f42-bcbe-40faf05c8d02", 00:13:12.006 "assigned_rate_limits": { 00:13:12.006 "rw_ios_per_sec": 0, 00:13:12.006 "rw_mbytes_per_sec": 0, 00:13:12.006 "r_mbytes_per_sec": 0, 00:13:12.006 "w_mbytes_per_sec": 0 00:13:12.006 }, 00:13:12.006 "claimed": true, 00:13:12.006 "claim_type": "exclusive_write", 00:13:12.006 "zoned": 
false, 00:13:12.006 "supported_io_types": { 00:13:12.006 "read": true, 00:13:12.006 "write": true, 00:13:12.006 "unmap": true, 00:13:12.006 "flush": true, 00:13:12.006 "reset": true, 00:13:12.006 "nvme_admin": false, 00:13:12.006 "nvme_io": false, 00:13:12.006 "nvme_io_md": false, 00:13:12.006 "write_zeroes": true, 00:13:12.006 "zcopy": true, 00:13:12.006 "get_zone_info": false, 00:13:12.006 "zone_management": false, 00:13:12.006 "zone_append": false, 00:13:12.006 "compare": false, 00:13:12.006 "compare_and_write": false, 00:13:12.006 "abort": true, 00:13:12.006 "seek_hole": false, 00:13:12.006 "seek_data": false, 00:13:12.006 "copy": true, 00:13:12.006 "nvme_iov_md": false 00:13:12.006 }, 00:13:12.006 "memory_domains": [ 00:13:12.006 { 00:13:12.006 "dma_device_id": "system", 00:13:12.006 "dma_device_type": 1 00:13:12.006 }, 00:13:12.006 { 00:13:12.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.006 "dma_device_type": 2 00:13:12.006 } 00:13:12.006 ], 00:13:12.006 "driver_specific": {} 00:13:12.006 }' 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.006 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.265 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:12.265 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.265 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.265 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:12.265 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:12.265 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:12.265 06:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:12.524 "name": "BaseBdev3", 00:13:12.524 "aliases": [ 00:13:12.524 "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca" 00:13:12.524 ], 00:13:12.524 "product_name": "Malloc disk", 00:13:12.524 "block_size": 512, 00:13:12.524 "num_blocks": 65536, 00:13:12.524 "uuid": "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca", 00:13:12.524 "assigned_rate_limits": { 00:13:12.524 "rw_ios_per_sec": 0, 00:13:12.524 "rw_mbytes_per_sec": 0, 00:13:12.524 "r_mbytes_per_sec": 0, 00:13:12.524 "w_mbytes_per_sec": 0 00:13:12.524 }, 00:13:12.524 "claimed": true, 00:13:12.524 "claim_type": "exclusive_write", 00:13:12.524 "zoned": false, 00:13:12.524 "supported_io_types": { 00:13:12.524 "read": true, 00:13:12.524 "write": true, 00:13:12.524 "unmap": true, 00:13:12.524 "flush": 
true, 00:13:12.524 "reset": true, 00:13:12.524 "nvme_admin": false, 00:13:12.524 "nvme_io": false, 00:13:12.524 "nvme_io_md": false, 00:13:12.524 "write_zeroes": true, 00:13:12.524 "zcopy": true, 00:13:12.524 "get_zone_info": false, 00:13:12.524 "zone_management": false, 00:13:12.524 "zone_append": false, 00:13:12.524 "compare": false, 00:13:12.524 "compare_and_write": false, 00:13:12.524 "abort": true, 00:13:12.524 "seek_hole": false, 00:13:12.524 "seek_data": false, 00:13:12.524 "copy": true, 00:13:12.524 "nvme_iov_md": false 00:13:12.524 }, 00:13:12.524 "memory_domains": [ 00:13:12.524 { 00:13:12.524 "dma_device_id": "system", 00:13:12.524 "dma_device_type": 1 00:13:12.524 }, 00:13:12.524 { 00:13:12.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.524 "dma_device_type": 2 00:13:12.524 } 00:13:12.524 ], 00:13:12.524 "driver_specific": {} 00:13:12.524 }' 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:12.524 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:12.784 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:13.043 "name": "BaseBdev4", 00:13:13.043 "aliases": [ 00:13:13.043 "5c1c4feb-1826-4904-a80e-c36915d11ae7" 00:13:13.043 ], 00:13:13.043 "product_name": "Malloc disk", 00:13:13.043 "block_size": 512, 00:13:13.043 "num_blocks": 65536, 00:13:13.043 "uuid": "5c1c4feb-1826-4904-a80e-c36915d11ae7", 00:13:13.043 "assigned_rate_limits": { 00:13:13.043 "rw_ios_per_sec": 0, 00:13:13.043 "rw_mbytes_per_sec": 0, 00:13:13.043 "r_mbytes_per_sec": 0, 00:13:13.043 "w_mbytes_per_sec": 0 00:13:13.043 }, 00:13:13.043 "claimed": true, 00:13:13.043 "claim_type": "exclusive_write", 00:13:13.043 "zoned": false, 00:13:13.043 "supported_io_types": { 00:13:13.043 "read": true, 00:13:13.043 "write": true, 00:13:13.043 "unmap": true, 00:13:13.043 "flush": true, 00:13:13.043 "reset": true, 00:13:13.043 "nvme_admin": false, 00:13:13.043 "nvme_io": false, 00:13:13.043 "nvme_io_md": false, 00:13:13.043 
"write_zeroes": true, 00:13:13.043 "zcopy": true, 00:13:13.043 "get_zone_info": false, 00:13:13.043 "zone_management": false, 00:13:13.043 "zone_append": false, 00:13:13.043 "compare": false, 00:13:13.043 "compare_and_write": false, 00:13:13.043 "abort": true, 00:13:13.043 "seek_hole": false, 00:13:13.043 "seek_data": false, 00:13:13.043 "copy": true, 00:13:13.043 "nvme_iov_md": false 00:13:13.043 }, 00:13:13.043 "memory_domains": [ 00:13:13.043 { 00:13:13.043 "dma_device_id": "system", 00:13:13.043 "dma_device_type": 1 00:13:13.043 }, 00:13:13.043 { 00:13:13.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.043 "dma_device_type": 2 00:13:13.043 } 00:13:13.043 ], 00:13:13.043 "driver_specific": {} 00:13:13.043 }' 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:13.043 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.302 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.302 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.302 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.302 06:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.302 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.302 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:13.562 [2024-08-13 06:08:15.197108] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.562 [2024-08-13 06:08:15.197134] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.562 [2024-08-13 06:08:15.197186] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.562 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.821 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.821 "name": "Existed_Raid", 00:13:13.821 "uuid": "5028f5f7-bdbe-4f21-a91e-ad04dcad78d7", 00:13:13.821 "strip_size_kb": 64, 00:13:13.821 "state": "offline", 00:13:13.821 "raid_level": "raid0", 00:13:13.821 "superblock": true, 00:13:13.821 "num_base_bdevs": 4, 00:13:13.821 "num_base_bdevs_discovered": 3, 00:13:13.821 "num_base_bdevs_operational": 3, 00:13:13.821 "base_bdevs_list": [ 00:13:13.821 { 00:13:13.821 "name": null, 00:13:13.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.821 "is_configured": false, 00:13:13.821 "data_offset": 2048, 00:13:13.821 "data_size": 63488 00:13:13.821 }, 00:13:13.821 { 00:13:13.821 "name": "BaseBdev2", 00:13:13.821 "uuid": "7e873591-accb-4f42-bcbe-40faf05c8d02", 00:13:13.821 "is_configured": true, 00:13:13.821 "data_offset": 2048, 00:13:13.821 "data_size": 63488 00:13:13.821 }, 00:13:13.821 { 00:13:13.821 "name": "BaseBdev3", 00:13:13.821 "uuid": "dd0a2302-a5ec-4e8c-a432-a3c2c93b49ca", 00:13:13.821 "is_configured": true, 00:13:13.821 "data_offset": 2048, 00:13:13.821 "data_size": 63488 00:13:13.821 }, 00:13:13.821 { 00:13:13.821 "name": "BaseBdev4", 00:13:13.821 "uuid": "5c1c4feb-1826-4904-a80e-c36915d11ae7", 00:13:13.821 "is_configured": true, 00:13:13.821 "data_offset": 2048, 00:13:13.821 "data_size": 63488 00:13:13.821 } 00:13:13.821 ] 00:13:13.821 }' 00:13:13.821 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.821 06:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.389 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:14.389 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:14.389 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.389 06:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:14.389 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:14.389 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.389 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
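verify_raid_bdev_state, whose locals are traced at @116-@124, reduces to one fetch plus a handful of field comparisons: bdev_raid_get_bdevs all is filtered down to the named array, and its state, level, strip size, and operational count are matched against the caller's expectations. A condensed sketch of that core (argument handling simplified; rpc is the shorthand from the first sketch):

    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
        local info
        info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level <<< "$info") == "$raid_level" ]] &&
        [[ $(jq -r .strip_size_kb <<< "$info") -eq $strip_size ]] &&
        [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq $operational ]]
    }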
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:14.648 [2024-08-13 06:08:16.290497] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:14.648 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:14.648 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:14.648 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.648 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:14.907 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:14.907 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.907 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:14.907 [2024-08-13 06:08:16.680697] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.166 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:15.166 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:15.166 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.166 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:15.166 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:15.166 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.166 06:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:15.426 [2024-08-13 06:08:17.119044] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:15.426 [2024-08-13 06:08:17.119087] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:15.426 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:15.426 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:15.426 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.426 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.685 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:15.685 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:15.685 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:15.685 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:15.685 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:15.685 06:08:17 
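The loop that follows tears down the surviving members one at a time; while any base bdev remains, the offline raid bdev must still be listed, and only after BaseBdev4 goes does raid_bdev_cleanup fire and the array disappear. The suite detects that final state with a jq select that collapses null to an empty string, as sketched here (rpc as defined earlier):

    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        rpc bdev_malloc_delete "$bdev"
    done
    # select(.) filters out null, so $raid_bdev is empty once nothing is listed
    raid_bdev=$(rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $raid_bdev ]]   # the raid bdev is gone with its last member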
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.945 BaseBdev2 00:13:15.945 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:15.945 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:15.945 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:15.945 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:15.945 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:15.945 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:15.945 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.204 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:16.204 [ 00:13:16.204 { 00:13:16.204 "name": "BaseBdev2", 00:13:16.204 "aliases": [ 00:13:16.204 "f65dca7c-54da-46d4-884a-56f82381131d" 00:13:16.204 ], 00:13:16.204 "product_name": "Malloc disk", 00:13:16.204 "block_size": 512, 00:13:16.204 "num_blocks": 65536, 00:13:16.204 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:16.204 "assigned_rate_limits": { 00:13:16.204 "rw_ios_per_sec": 0, 00:13:16.204 "rw_mbytes_per_sec": 0, 00:13:16.204 "r_mbytes_per_sec": 0, 00:13:16.204 "w_mbytes_per_sec": 0 00:13:16.204 }, 00:13:16.204 "claimed": false, 00:13:16.204 "zoned": false, 00:13:16.204 "supported_io_types": { 00:13:16.204 "read": true, 00:13:16.204 "write": true, 00:13:16.204 "unmap": true, 00:13:16.204 "flush": true, 00:13:16.204 "reset": true, 00:13:16.204 "nvme_admin": false, 00:13:16.204 "nvme_io": false, 00:13:16.204 "nvme_io_md": false, 00:13:16.204 "write_zeroes": true, 00:13:16.204 "zcopy": true, 00:13:16.204 "get_zone_info": false, 00:13:16.204 "zone_management": false, 00:13:16.204 "zone_append": false, 00:13:16.204 "compare": false, 00:13:16.204 "compare_and_write": false, 00:13:16.204 "abort": true, 00:13:16.204 "seek_hole": false, 00:13:16.204 "seek_data": false, 00:13:16.204 "copy": true, 00:13:16.204 "nvme_iov_md": false 00:13:16.204 }, 00:13:16.204 "memory_domains": [ 00:13:16.204 { 00:13:16.204 "dma_device_id": "system", 00:13:16.204 "dma_device_type": 1 00:13:16.204 }, 00:13:16.204 { 00:13:16.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.204 "dma_device_type": 2 00:13:16.204 } 00:13:16.204 ], 00:13:16.204 "driver_specific": {} 00:13:16.204 } 00:13:16.204 ] 00:13:16.204 06:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:16.204 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:16.204 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:16.204 06:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.464 BaseBdev3 00:13:16.464 06:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
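Each recreated malloc disk is gated on the waitforbdev helper traced at common/autotest_common.sh@895-@903: it drains pending examine callbacks with bdev_wait_for_examine, then asks for the bdev with a 2000 ms timeout so the test never races bdev registration. A trimmed sketch of what the trace shows (the real helper takes the timeout as an optional second argument):

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}
        rpc bdev_wait_for_examine                # let examine callbacks settle first
        # -t makes the RPC wait up to bdev_timeout ms for the bdev to appear
        rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }

    rpc bdev_malloc_create 32 512 -b BaseBdev2   # 32 MiB disk, 512-byte blocks
    waitforbdev BaseBdev2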
BaseBdev3 00:13:16.464 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:16.464 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:16.464 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:16.464 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:16.464 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:16.464 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.724 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.983 [ 00:13:16.983 { 00:13:16.983 "name": "BaseBdev3", 00:13:16.983 "aliases": [ 00:13:16.983 "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4" 00:13:16.983 ], 00:13:16.983 "product_name": "Malloc disk", 00:13:16.983 "block_size": 512, 00:13:16.983 "num_blocks": 65536, 00:13:16.983 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:16.983 "assigned_rate_limits": { 00:13:16.983 "rw_ios_per_sec": 0, 00:13:16.983 "rw_mbytes_per_sec": 0, 00:13:16.983 "r_mbytes_per_sec": 0, 00:13:16.983 "w_mbytes_per_sec": 0 00:13:16.983 }, 00:13:16.983 "claimed": false, 00:13:16.984 "zoned": false, 00:13:16.984 "supported_io_types": { 00:13:16.984 "read": true, 00:13:16.984 "write": true, 00:13:16.984 "unmap": true, 00:13:16.984 "flush": true, 00:13:16.984 "reset": true, 00:13:16.984 "nvme_admin": false, 00:13:16.984 "nvme_io": false, 00:13:16.984 "nvme_io_md": false, 00:13:16.984 "write_zeroes": true, 00:13:16.984 "zcopy": true, 00:13:16.984 "get_zone_info": false, 00:13:16.984 "zone_management": false, 00:13:16.984 "zone_append": false, 00:13:16.984 "compare": false, 00:13:16.984 "compare_and_write": false, 00:13:16.984 "abort": true, 00:13:16.984 "seek_hole": false, 00:13:16.984 "seek_data": false, 00:13:16.984 "copy": true, 00:13:16.984 "nvme_iov_md": false 00:13:16.984 }, 00:13:16.984 "memory_domains": [ 00:13:16.984 { 00:13:16.984 "dma_device_id": "system", 00:13:16.984 "dma_device_type": 1 00:13:16.984 }, 00:13:16.984 { 00:13:16.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.984 "dma_device_type": 2 00:13:16.984 } 00:13:16.984 ], 00:13:16.984 "driver_specific": {} 00:13:16.984 } 00:13:16.984 ] 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:16.984 BaseBdev4 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@897 -- # local i 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:16.984 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:17.243 06:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:17.503 [ 00:13:17.503 { 00:13:17.503 "name": "BaseBdev4", 00:13:17.503 "aliases": [ 00:13:17.503 "2ea858e9-ea20-487c-ad15-be12fdb4920e" 00:13:17.503 ], 00:13:17.503 "product_name": "Malloc disk", 00:13:17.503 "block_size": 512, 00:13:17.503 "num_blocks": 65536, 00:13:17.503 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:17.503 "assigned_rate_limits": { 00:13:17.503 "rw_ios_per_sec": 0, 00:13:17.503 "rw_mbytes_per_sec": 0, 00:13:17.503 "r_mbytes_per_sec": 0, 00:13:17.503 "w_mbytes_per_sec": 0 00:13:17.503 }, 00:13:17.503 "claimed": false, 00:13:17.503 "zoned": false, 00:13:17.503 "supported_io_types": { 00:13:17.503 "read": true, 00:13:17.503 "write": true, 00:13:17.503 "unmap": true, 00:13:17.503 "flush": true, 00:13:17.503 "reset": true, 00:13:17.503 "nvme_admin": false, 00:13:17.503 "nvme_io": false, 00:13:17.503 "nvme_io_md": false, 00:13:17.503 "write_zeroes": true, 00:13:17.503 "zcopy": true, 00:13:17.503 "get_zone_info": false, 00:13:17.503 "zone_management": false, 00:13:17.503 "zone_append": false, 00:13:17.503 "compare": false, 00:13:17.503 "compare_and_write": false, 00:13:17.503 "abort": true, 00:13:17.503 "seek_hole": false, 00:13:17.503 "seek_data": false, 00:13:17.503 "copy": true, 00:13:17.503 "nvme_iov_md": false 00:13:17.503 }, 00:13:17.503 "memory_domains": [ 00:13:17.503 { 00:13:17.503 "dma_device_id": "system", 00:13:17.503 "dma_device_type": 1 00:13:17.503 }, 00:13:17.503 { 00:13:17.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.503 "dma_device_type": 2 00:13:17.503 } 00:13:17.503 ], 00:13:17.503 "driver_specific": {} 00:13:17.503 } 00:13:17.503 ] 00:13:17.503 06:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:17.503 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:17.503 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:17.503 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:17.764 [2024-08-13 06:08:19.320593] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.764 [2024-08-13 06:08:19.320731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.764 [2024-08-13 06:08:19.320781] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.764 [2024-08-13 06:08:19.322820] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.764 [2024-08-13 06:08:19.322923] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.764 06:08:19 
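Creating the array while BaseBdev1 is still missing is deliberate: with -s the RPC writes superblock metadata and tolerates the absent member, recording the slot and leaving the raid in the configuring state until all four names materialize (hence the NOTICE about BaseBdev1 followed by three claimed members). The creation call from the trace, with its flags spelled out:

    # -z 64: strip size in KiB; -s: write a superblock to each member;
    # -r raid0: raid level; -b: ordered member list; -n: raid bdev name
    rpc bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid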
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.764 "name": "Existed_Raid", 00:13:17.764 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:17.764 "strip_size_kb": 64, 00:13:17.764 "state": "configuring", 00:13:17.764 "raid_level": "raid0", 00:13:17.764 "superblock": true, 00:13:17.764 "num_base_bdevs": 4, 00:13:17.764 "num_base_bdevs_discovered": 3, 00:13:17.764 "num_base_bdevs_operational": 4, 00:13:17.764 "base_bdevs_list": [ 00:13:17.764 { 00:13:17.764 "name": "BaseBdev1", 00:13:17.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.764 "is_configured": false, 00:13:17.764 "data_offset": 0, 00:13:17.764 "data_size": 0 00:13:17.764 }, 00:13:17.764 { 00:13:17.764 "name": "BaseBdev2", 00:13:17.764 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:17.764 "is_configured": true, 00:13:17.764 "data_offset": 2048, 00:13:17.764 "data_size": 63488 00:13:17.764 }, 00:13:17.764 { 00:13:17.764 "name": "BaseBdev3", 00:13:17.764 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:17.764 "is_configured": true, 00:13:17.764 "data_offset": 2048, 00:13:17.764 "data_size": 63488 00:13:17.764 }, 00:13:17.764 { 00:13:17.764 "name": "BaseBdev4", 00:13:17.764 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:17.764 "is_configured": true, 00:13:17.764 "data_offset": 2048, 00:13:17.764 "data_size": 63488 00:13:17.764 } 00:13:17.764 ] 00:13:17.764 }' 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.764 06:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.334 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:18.594 [2024-08-13 06:08:20.207013] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.594 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.853 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.854 "name": "Existed_Raid", 00:13:18.854 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:18.854 "strip_size_kb": 64, 00:13:18.854 "state": "configuring", 00:13:18.854 "raid_level": "raid0", 00:13:18.854 "superblock": true, 00:13:18.854 "num_base_bdevs": 4, 00:13:18.854 "num_base_bdevs_discovered": 2, 00:13:18.854 "num_base_bdevs_operational": 4, 00:13:18.854 "base_bdevs_list": [ 00:13:18.854 { 00:13:18.854 "name": "BaseBdev1", 00:13:18.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.854 "is_configured": false, 00:13:18.854 "data_offset": 0, 00:13:18.854 "data_size": 0 00:13:18.854 }, 00:13:18.854 { 00:13:18.854 "name": null, 00:13:18.854 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:18.854 "is_configured": false, 00:13:18.854 "data_offset": 2048, 00:13:18.854 "data_size": 63488 00:13:18.854 }, 00:13:18.854 { 00:13:18.854 "name": "BaseBdev3", 00:13:18.854 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:18.854 "is_configured": true, 00:13:18.854 "data_offset": 2048, 00:13:18.854 "data_size": 63488 00:13:18.854 }, 00:13:18.854 { 00:13:18.854 "name": "BaseBdev4", 00:13:18.854 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:18.854 "is_configured": true, 00:13:18.854 "data_offset": 2048, 00:13:18.854 "data_size": 63488 00:13:18.854 } 00:13:18.854 ] 00:13:18.854 }' 00:13:18.854 06:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.854 06:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.423 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.423 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- 
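bdev_raid_remove_base_bdev empties a slot without shrinking the array: the descriptor above still lists four entries, but slot 1 now carries a null name, an all-zero uuid, and is_configured false. Because the name is gone, the suite addresses the hole by list index, which a short jq check reproduces (rpc as before):

    rpc bdev_raid_remove_base_bdev BaseBdev2
    # Slot 1 is now a hole: still listed, but unconfigured
    configured=$(rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured')
    [[ $configured == false ]]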
bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.683 [2024-08-13 06:08:21.417692] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.683 BaseBdev1 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:19.683 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:19.942 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:20.202 [ 00:13:20.202 { 00:13:20.202 "name": "BaseBdev1", 00:13:20.202 "aliases": [ 00:13:20.202 "be2f08c6-b8e0-4234-9efc-438aab7dbcc3" 00:13:20.202 ], 00:13:20.202 "product_name": "Malloc disk", 00:13:20.202 "block_size": 512, 00:13:20.202 "num_blocks": 65536, 00:13:20.202 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:20.202 "assigned_rate_limits": { 00:13:20.202 "rw_ios_per_sec": 0, 00:13:20.202 "rw_mbytes_per_sec": 0, 00:13:20.202 "r_mbytes_per_sec": 0, 00:13:20.202 "w_mbytes_per_sec": 0 00:13:20.202 }, 00:13:20.202 "claimed": true, 00:13:20.202 "claim_type": "exclusive_write", 00:13:20.202 "zoned": false, 00:13:20.202 "supported_io_types": { 00:13:20.202 "read": true, 00:13:20.202 "write": true, 00:13:20.202 "unmap": true, 00:13:20.202 "flush": true, 00:13:20.202 "reset": true, 00:13:20.202 "nvme_admin": false, 00:13:20.202 "nvme_io": false, 00:13:20.202 "nvme_io_md": false, 00:13:20.202 "write_zeroes": true, 00:13:20.202 "zcopy": true, 00:13:20.202 "get_zone_info": false, 00:13:20.202 "zone_management": false, 00:13:20.202 "zone_append": false, 00:13:20.202 "compare": false, 00:13:20.202 "compare_and_write": false, 00:13:20.202 "abort": true, 00:13:20.202 "seek_hole": false, 00:13:20.202 "seek_data": false, 00:13:20.202 "copy": true, 00:13:20.202 "nvme_iov_md": false 00:13:20.202 }, 00:13:20.202 "memory_domains": [ 00:13:20.202 { 00:13:20.202 "dma_device_id": "system", 00:13:20.202 "dma_device_type": 1 00:13:20.202 }, 00:13:20.202 { 00:13:20.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.202 "dma_device_type": 2 00:13:20.202 } 00:13:20.202 ], 00:13:20.202 "driver_specific": {} 00:13:20.202 } 00:13:20.202 ] 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:20.202 06:08:21 
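Note the ordering in the trace just above: the "bdev BaseBdev1 is claimed" debug line fires inside bdev_malloc_create itself, because the configuring raid already holds a slot reserved for that name and grabs the bdev the moment it registers. The claim is visible in the descriptor, where claim_type exclusive_write means no other module may open the member for writing; a one-line check (jq -e sets the exit status from the expression):

    rpc bdev_malloc_create 32 512 -b BaseBdev1   # claimed by the raid on registration
    rpc bdev_get_bdevs -b BaseBdev1 |
        jq -e '.[0].claimed == true and .[0].claim_type == "exclusive_write"'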
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.202 06:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.462 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.462 "name": "Existed_Raid", 00:13:20.462 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:20.462 "strip_size_kb": 64, 00:13:20.462 "state": "configuring", 00:13:20.462 "raid_level": "raid0", 00:13:20.462 "superblock": true, 00:13:20.462 "num_base_bdevs": 4, 00:13:20.462 "num_base_bdevs_discovered": 3, 00:13:20.462 "num_base_bdevs_operational": 4, 00:13:20.462 "base_bdevs_list": [ 00:13:20.462 { 00:13:20.462 "name": "BaseBdev1", 00:13:20.462 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:20.462 "is_configured": true, 00:13:20.462 "data_offset": 2048, 00:13:20.462 "data_size": 63488 00:13:20.462 }, 00:13:20.462 { 00:13:20.462 "name": null, 00:13:20.462 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:20.462 "is_configured": false, 00:13:20.462 "data_offset": 2048, 00:13:20.462 "data_size": 63488 00:13:20.462 }, 00:13:20.462 { 00:13:20.462 "name": "BaseBdev3", 00:13:20.462 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:20.462 "is_configured": true, 00:13:20.462 "data_offset": 2048, 00:13:20.462 "data_size": 63488 00:13:20.462 }, 00:13:20.462 { 00:13:20.462 "name": "BaseBdev4", 00:13:20.462 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:20.462 "is_configured": true, 00:13:20.462 "data_offset": 2048, 00:13:20.462 "data_size": 63488 00:13:20.462 } 00:13:20.462 ] 00:13:20.462 }' 00:13:20.463 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.463 06:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.031 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.031 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:21.031 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:21.031 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:21.291 [2024-08-13 06:08:22.931121] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.291 06:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.551 06:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.551 "name": "Existed_Raid", 00:13:21.551 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:21.551 "strip_size_kb": 64, 00:13:21.551 "state": "configuring", 00:13:21.551 "raid_level": "raid0", 00:13:21.551 "superblock": true, 00:13:21.551 "num_base_bdevs": 4, 00:13:21.551 "num_base_bdevs_discovered": 2, 00:13:21.551 "num_base_bdevs_operational": 4, 00:13:21.551 "base_bdevs_list": [ 00:13:21.551 { 00:13:21.551 "name": "BaseBdev1", 00:13:21.551 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:21.551 "is_configured": true, 00:13:21.551 "data_offset": 2048, 00:13:21.551 "data_size": 63488 00:13:21.551 }, 00:13:21.551 { 00:13:21.551 "name": null, 00:13:21.551 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:21.551 "is_configured": false, 00:13:21.551 "data_offset": 2048, 00:13:21.551 "data_size": 63488 00:13:21.551 }, 00:13:21.551 { 00:13:21.551 "name": null, 00:13:21.551 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:21.551 "is_configured": false, 00:13:21.551 "data_offset": 2048, 00:13:21.551 "data_size": 63488 00:13:21.551 }, 00:13:21.551 { 00:13:21.551 "name": "BaseBdev4", 00:13:21.551 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:21.551 "is_configured": true, 00:13:21.551 "data_offset": 2048, 00:13:21.551 "data_size": 63488 00:13:21.551 } 00:13:21.551 ] 00:13:21.551 }' 00:13:21.551 06:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.551 06:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.119 06:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.119 06:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:22.378 06:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:22.378 06:08:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:22.378 [2024-08-13 06:08:24.117212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.378 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.638 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.638 "name": "Existed_Raid", 00:13:22.638 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:22.638 "strip_size_kb": 64, 00:13:22.638 "state": "configuring", 00:13:22.638 "raid_level": "raid0", 00:13:22.638 "superblock": true, 00:13:22.638 "num_base_bdevs": 4, 00:13:22.638 "num_base_bdevs_discovered": 3, 00:13:22.638 "num_base_bdevs_operational": 4, 00:13:22.638 "base_bdevs_list": [ 00:13:22.638 { 00:13:22.638 "name": "BaseBdev1", 00:13:22.638 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:22.638 "is_configured": true, 00:13:22.638 "data_offset": 2048, 00:13:22.638 "data_size": 63488 00:13:22.638 }, 00:13:22.638 { 00:13:22.638 "name": null, 00:13:22.638 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:22.638 "is_configured": false, 00:13:22.638 "data_offset": 2048, 00:13:22.638 "data_size": 63488 00:13:22.638 }, 00:13:22.638 { 00:13:22.638 "name": "BaseBdev3", 00:13:22.638 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:22.638 "is_configured": true, 00:13:22.638 "data_offset": 2048, 00:13:22.638 "data_size": 63488 00:13:22.638 }, 00:13:22.638 { 00:13:22.638 "name": "BaseBdev4", 00:13:22.638 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:22.638 "is_configured": true, 00:13:22.638 "data_offset": 2048, 00:13:22.638 "data_size": 63488 00:13:22.638 } 00:13:22.638 ] 00:13:22.638 }' 00:13:22.638 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.638 06:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.207 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 
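bdev_raid_remove_base_bdev and bdev_raid_add_base_bdev are the complementary pair this phase exercises: a member can be detached from and re-attached to a configuring array by name, and every successful add shows up as one more num_base_bdevs_discovered. The pair as invoked in the trace (rpc as before):

    rpc bdev_raid_remove_base_bdev BaseBdev3              # frees slot 2
    rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3    # re-claims the same bdev
    rpc bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'   # back to 3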
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:23.207 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.207 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:23.207 06:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:23.466 [2024-08-13 06:08:25.167670] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.466 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.725 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.725 "name": "Existed_Raid", 00:13:23.725 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:23.725 "strip_size_kb": 64, 00:13:23.725 "state": "configuring", 00:13:23.725 "raid_level": "raid0", 00:13:23.725 "superblock": true, 00:13:23.725 "num_base_bdevs": 4, 00:13:23.725 "num_base_bdevs_discovered": 2, 00:13:23.725 "num_base_bdevs_operational": 4, 00:13:23.725 "base_bdevs_list": [ 00:13:23.725 { 00:13:23.725 "name": null, 00:13:23.725 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:23.725 "is_configured": false, 00:13:23.725 "data_offset": 2048, 00:13:23.725 "data_size": 63488 00:13:23.725 }, 00:13:23.725 { 00:13:23.725 "name": null, 00:13:23.725 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:23.725 "is_configured": false, 00:13:23.725 "data_offset": 2048, 00:13:23.725 "data_size": 63488 00:13:23.725 }, 00:13:23.725 { 00:13:23.725 "name": "BaseBdev3", 00:13:23.725 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:23.725 "is_configured": true, 00:13:23.725 "data_offset": 2048, 00:13:23.725 "data_size": 63488 00:13:23.725 }, 00:13:23.725 { 00:13:23.725 "name": "BaseBdev4", 00:13:23.725 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:23.725 "is_configured": true, 00:13:23.725 "data_offset": 2048, 00:13:23.726 "data_size": 63488 00:13:23.726 
} 00:13:23.726 ] 00:13:23.726 }' 00:13:23.726 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.726 06:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.295 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.295 06:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.555 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:24.555 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:24.555 [2024-08-13 06:08:26.333688] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.814 "name": "Existed_Raid", 00:13:24.814 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:24.814 "strip_size_kb": 64, 00:13:24.814 "state": "configuring", 00:13:24.814 "raid_level": "raid0", 00:13:24.814 "superblock": true, 00:13:24.814 "num_base_bdevs": 4, 00:13:24.814 "num_base_bdevs_discovered": 3, 00:13:24.814 "num_base_bdevs_operational": 4, 00:13:24.814 "base_bdevs_list": [ 00:13:24.814 { 00:13:24.814 "name": null, 00:13:24.814 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:24.814 "is_configured": false, 00:13:24.814 "data_offset": 2048, 00:13:24.814 "data_size": 63488 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "name": "BaseBdev2", 00:13:24.814 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:24.814 "is_configured": true, 00:13:24.814 "data_offset": 2048, 00:13:24.814 "data_size": 63488 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "name": "BaseBdev3", 00:13:24.814 "uuid": 
"fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:24.814 "is_configured": true, 00:13:24.814 "data_offset": 2048, 00:13:24.814 "data_size": 63488 00:13:24.814 }, 00:13:24.814 { 00:13:24.814 "name": "BaseBdev4", 00:13:24.814 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:24.814 "is_configured": true, 00:13:24.814 "data_offset": 2048, 00:13:24.814 "data_size": 63488 00:13:24.814 } 00:13:24.814 ] 00:13:24.814 }' 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.814 06:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.384 06:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.384 06:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:25.644 06:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:25.644 06:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:25.644 06:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u be2f08c6-b8e0-4234-9efc-438aab7dbcc3 00:13:25.904 [2024-08-13 06:08:27.656115] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:25.904 [2024-08-13 06:08:27.656448] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:25.904 [2024-08-13 06:08:27.656499] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:25.904 [2024-08-13 06:08:27.656855] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:25.904 [2024-08-13 06:08:27.657044] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:25.904 [2024-08-13 06:08:27.657085] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:25.904 [2024-08-13 06:08:27.657227] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.904 NewBaseBdev 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:25.904 06:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:26.163 06:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:26.422 [ 00:13:26.422 { 00:13:26.422 "name": "NewBaseBdev", 00:13:26.422 "aliases": [ 00:13:26.422 "be2f08c6-b8e0-4234-9efc-438aab7dbcc3" 00:13:26.422 ], 00:13:26.422 "product_name": "Malloc disk", 00:13:26.422 "block_size": 512, 00:13:26.422 "num_blocks": 65536, 00:13:26.422 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:26.422 "assigned_rate_limits": { 00:13:26.422 "rw_ios_per_sec": 0, 00:13:26.422 "rw_mbytes_per_sec": 0, 00:13:26.422 "r_mbytes_per_sec": 0, 00:13:26.422 "w_mbytes_per_sec": 0 00:13:26.422 }, 00:13:26.422 "claimed": true, 00:13:26.422 "claim_type": "exclusive_write", 00:13:26.422 "zoned": false, 00:13:26.422 "supported_io_types": { 00:13:26.422 "read": true, 00:13:26.422 "write": true, 00:13:26.422 "unmap": true, 00:13:26.422 "flush": true, 00:13:26.422 "reset": true, 00:13:26.422 "nvme_admin": false, 00:13:26.422 "nvme_io": false, 00:13:26.422 "nvme_io_md": false, 00:13:26.422 "write_zeroes": true, 00:13:26.422 "zcopy": true, 00:13:26.422 "get_zone_info": false, 00:13:26.422 "zone_management": false, 00:13:26.422 "zone_append": false, 00:13:26.422 "compare": false, 00:13:26.422 "compare_and_write": false, 00:13:26.422 "abort": true, 00:13:26.422 "seek_hole": false, 00:13:26.422 "seek_data": false, 00:13:26.422 "copy": true, 00:13:26.422 "nvme_iov_md": false 00:13:26.422 }, 00:13:26.422 "memory_domains": [ 00:13:26.422 { 00:13:26.422 "dma_device_id": "system", 00:13:26.422 "dma_device_type": 1 00:13:26.422 }, 00:13:26.422 { 00:13:26.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.422 "dma_device_type": 2 00:13:26.422 } 00:13:26.422 ], 00:13:26.422 "driver_specific": {} 00:13:26.422 } 00:13:26.422 ] 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.422 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.691 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.691 "name": "Existed_Raid", 00:13:26.691 "uuid": 
"cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:26.691 "strip_size_kb": 64, 00:13:26.691 "state": "online", 00:13:26.691 "raid_level": "raid0", 00:13:26.691 "superblock": true, 00:13:26.691 "num_base_bdevs": 4, 00:13:26.691 "num_base_bdevs_discovered": 4, 00:13:26.691 "num_base_bdevs_operational": 4, 00:13:26.691 "base_bdevs_list": [ 00:13:26.691 { 00:13:26.691 "name": "NewBaseBdev", 00:13:26.691 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:26.691 "is_configured": true, 00:13:26.691 "data_offset": 2048, 00:13:26.691 "data_size": 63488 00:13:26.691 }, 00:13:26.691 { 00:13:26.691 "name": "BaseBdev2", 00:13:26.691 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:26.691 "is_configured": true, 00:13:26.691 "data_offset": 2048, 00:13:26.691 "data_size": 63488 00:13:26.691 }, 00:13:26.691 { 00:13:26.691 "name": "BaseBdev3", 00:13:26.691 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:26.691 "is_configured": true, 00:13:26.691 "data_offset": 2048, 00:13:26.691 "data_size": 63488 00:13:26.691 }, 00:13:26.691 { 00:13:26.691 "name": "BaseBdev4", 00:13:26.691 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:26.691 "is_configured": true, 00:13:26.691 "data_offset": 2048, 00:13:26.691 "data_size": 63488 00:13:26.691 } 00:13:26.691 ] 00:13:26.691 }' 00:13:26.691 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.691 06:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:27.302 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:27.302 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:27.302 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:27.302 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:27.302 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:27.303 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:27.303 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:27.303 [2024-08-13 06:08:28.950239] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.303 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:27.303 "name": "Existed_Raid", 00:13:27.303 "aliases": [ 00:13:27.303 "cac885a0-6150-438d-a3d9-17c4cc638e6c" 00:13:27.303 ], 00:13:27.303 "product_name": "Raid Volume", 00:13:27.303 "block_size": 512, 00:13:27.303 "num_blocks": 253952, 00:13:27.303 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:27.303 "assigned_rate_limits": { 00:13:27.303 "rw_ios_per_sec": 0, 00:13:27.303 "rw_mbytes_per_sec": 0, 00:13:27.303 "r_mbytes_per_sec": 0, 00:13:27.303 "w_mbytes_per_sec": 0 00:13:27.303 }, 00:13:27.303 "claimed": false, 00:13:27.303 "zoned": false, 00:13:27.303 "supported_io_types": { 00:13:27.303 "read": true, 00:13:27.303 "write": true, 00:13:27.303 "unmap": true, 00:13:27.303 "flush": true, 00:13:27.303 "reset": true, 00:13:27.303 "nvme_admin": false, 00:13:27.303 "nvme_io": false, 00:13:27.303 "nvme_io_md": false, 00:13:27.303 
"write_zeroes": true, 00:13:27.303 "zcopy": false, 00:13:27.303 "get_zone_info": false, 00:13:27.303 "zone_management": false, 00:13:27.303 "zone_append": false, 00:13:27.303 "compare": false, 00:13:27.303 "compare_and_write": false, 00:13:27.303 "abort": false, 00:13:27.303 "seek_hole": false, 00:13:27.303 "seek_data": false, 00:13:27.303 "copy": false, 00:13:27.303 "nvme_iov_md": false 00:13:27.303 }, 00:13:27.303 "memory_domains": [ 00:13:27.303 { 00:13:27.303 "dma_device_id": "system", 00:13:27.303 "dma_device_type": 1 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.303 "dma_device_type": 2 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "dma_device_id": "system", 00:13:27.303 "dma_device_type": 1 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.303 "dma_device_type": 2 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "dma_device_id": "system", 00:13:27.303 "dma_device_type": 1 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.303 "dma_device_type": 2 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "dma_device_id": "system", 00:13:27.303 "dma_device_type": 1 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.303 "dma_device_type": 2 00:13:27.303 } 00:13:27.303 ], 00:13:27.303 "driver_specific": { 00:13:27.303 "raid": { 00:13:27.303 "uuid": "cac885a0-6150-438d-a3d9-17c4cc638e6c", 00:13:27.303 "strip_size_kb": 64, 00:13:27.303 "state": "online", 00:13:27.303 "raid_level": "raid0", 00:13:27.303 "superblock": true, 00:13:27.303 "num_base_bdevs": 4, 00:13:27.303 "num_base_bdevs_discovered": 4, 00:13:27.303 "num_base_bdevs_operational": 4, 00:13:27.303 "base_bdevs_list": [ 00:13:27.303 { 00:13:27.303 "name": "NewBaseBdev", 00:13:27.303 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:27.303 "is_configured": true, 00:13:27.303 "data_offset": 2048, 00:13:27.303 "data_size": 63488 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "name": "BaseBdev2", 00:13:27.303 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:27.303 "is_configured": true, 00:13:27.303 "data_offset": 2048, 00:13:27.303 "data_size": 63488 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "name": "BaseBdev3", 00:13:27.303 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:27.303 "is_configured": true, 00:13:27.303 "data_offset": 2048, 00:13:27.303 "data_size": 63488 00:13:27.303 }, 00:13:27.303 { 00:13:27.303 "name": "BaseBdev4", 00:13:27.303 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:27.303 "is_configured": true, 00:13:27.303 "data_offset": 2048, 00:13:27.303 "data_size": 63488 00:13:27.303 } 00:13:27.303 ] 00:13:27.303 } 00:13:27.303 } 00:13:27.303 }' 00:13:27.303 06:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.303 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:27.303 BaseBdev2 00:13:27.303 BaseBdev3 00:13:27.303 BaseBdev4' 00:13:27.303 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.303 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:27.303 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.563 06:08:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.563 "name": "NewBaseBdev", 00:13:27.563 "aliases": [ 00:13:27.563 "be2f08c6-b8e0-4234-9efc-438aab7dbcc3" 00:13:27.563 ], 00:13:27.563 "product_name": "Malloc disk", 00:13:27.563 "block_size": 512, 00:13:27.563 "num_blocks": 65536, 00:13:27.563 "uuid": "be2f08c6-b8e0-4234-9efc-438aab7dbcc3", 00:13:27.563 "assigned_rate_limits": { 00:13:27.563 "rw_ios_per_sec": 0, 00:13:27.563 "rw_mbytes_per_sec": 0, 00:13:27.563 "r_mbytes_per_sec": 0, 00:13:27.563 "w_mbytes_per_sec": 0 00:13:27.563 }, 00:13:27.563 "claimed": true, 00:13:27.563 "claim_type": "exclusive_write", 00:13:27.563 "zoned": false, 00:13:27.563 "supported_io_types": { 00:13:27.563 "read": true, 00:13:27.563 "write": true, 00:13:27.563 "unmap": true, 00:13:27.563 "flush": true, 00:13:27.563 "reset": true, 00:13:27.563 "nvme_admin": false, 00:13:27.563 "nvme_io": false, 00:13:27.563 "nvme_io_md": false, 00:13:27.563 "write_zeroes": true, 00:13:27.563 "zcopy": true, 00:13:27.563 "get_zone_info": false, 00:13:27.563 "zone_management": false, 00:13:27.563 "zone_append": false, 00:13:27.563 "compare": false, 00:13:27.563 "compare_and_write": false, 00:13:27.563 "abort": true, 00:13:27.563 "seek_hole": false, 00:13:27.563 "seek_data": false, 00:13:27.563 "copy": true, 00:13:27.563 "nvme_iov_md": false 00:13:27.563 }, 00:13:27.563 "memory_domains": [ 00:13:27.563 { 00:13:27.563 "dma_device_id": "system", 00:13:27.563 "dma_device_type": 1 00:13:27.563 }, 00:13:27.563 { 00:13:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.563 "dma_device_type": 2 00:13:27.563 } 00:13:27.563 ], 00:13:27.563 "driver_specific": {} 00:13:27.563 }' 00:13:27.563 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.563 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.563 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.563 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:27.823 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:28.082 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:28.082 "name": "BaseBdev2", 00:13:28.082 "aliases": [ 
00:13:28.082 "f65dca7c-54da-46d4-884a-56f82381131d" 00:13:28.082 ], 00:13:28.082 "product_name": "Malloc disk", 00:13:28.082 "block_size": 512, 00:13:28.082 "num_blocks": 65536, 00:13:28.082 "uuid": "f65dca7c-54da-46d4-884a-56f82381131d", 00:13:28.082 "assigned_rate_limits": { 00:13:28.082 "rw_ios_per_sec": 0, 00:13:28.082 "rw_mbytes_per_sec": 0, 00:13:28.082 "r_mbytes_per_sec": 0, 00:13:28.082 "w_mbytes_per_sec": 0 00:13:28.082 }, 00:13:28.082 "claimed": true, 00:13:28.082 "claim_type": "exclusive_write", 00:13:28.082 "zoned": false, 00:13:28.082 "supported_io_types": { 00:13:28.082 "read": true, 00:13:28.082 "write": true, 00:13:28.083 "unmap": true, 00:13:28.083 "flush": true, 00:13:28.083 "reset": true, 00:13:28.083 "nvme_admin": false, 00:13:28.083 "nvme_io": false, 00:13:28.083 "nvme_io_md": false, 00:13:28.083 "write_zeroes": true, 00:13:28.083 "zcopy": true, 00:13:28.083 "get_zone_info": false, 00:13:28.083 "zone_management": false, 00:13:28.083 "zone_append": false, 00:13:28.083 "compare": false, 00:13:28.083 "compare_and_write": false, 00:13:28.083 "abort": true, 00:13:28.083 "seek_hole": false, 00:13:28.083 "seek_data": false, 00:13:28.083 "copy": true, 00:13:28.083 "nvme_iov_md": false 00:13:28.083 }, 00:13:28.083 "memory_domains": [ 00:13:28.083 { 00:13:28.083 "dma_device_id": "system", 00:13:28.083 "dma_device_type": 1 00:13:28.083 }, 00:13:28.083 { 00:13:28.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.083 "dma_device_type": 2 00:13:28.083 } 00:13:28.083 ], 00:13:28.083 "driver_specific": {} 00:13:28.083 }' 00:13:28.083 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.083 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.083 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:28.083 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.342 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.342 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:28.342 06:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.342 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.342 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:28.342 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.342 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.601 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:28.601 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:28.601 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:28.601 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:28.601 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:28.601 "name": "BaseBdev3", 00:13:28.601 "aliases": [ 00:13:28.602 "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4" 00:13:28.602 ], 00:13:28.602 "product_name": "Malloc disk", 00:13:28.602 "block_size": 512, 
00:13:28.602 "num_blocks": 65536, 00:13:28.602 "uuid": "fe82bbfe-98ac-4dba-8f0b-55ec85fc3fc4", 00:13:28.602 "assigned_rate_limits": { 00:13:28.602 "rw_ios_per_sec": 0, 00:13:28.602 "rw_mbytes_per_sec": 0, 00:13:28.602 "r_mbytes_per_sec": 0, 00:13:28.602 "w_mbytes_per_sec": 0 00:13:28.602 }, 00:13:28.602 "claimed": true, 00:13:28.602 "claim_type": "exclusive_write", 00:13:28.602 "zoned": false, 00:13:28.602 "supported_io_types": { 00:13:28.602 "read": true, 00:13:28.602 "write": true, 00:13:28.602 "unmap": true, 00:13:28.602 "flush": true, 00:13:28.602 "reset": true, 00:13:28.602 "nvme_admin": false, 00:13:28.602 "nvme_io": false, 00:13:28.602 "nvme_io_md": false, 00:13:28.602 "write_zeroes": true, 00:13:28.602 "zcopy": true, 00:13:28.602 "get_zone_info": false, 00:13:28.602 "zone_management": false, 00:13:28.602 "zone_append": false, 00:13:28.602 "compare": false, 00:13:28.602 "compare_and_write": false, 00:13:28.602 "abort": true, 00:13:28.602 "seek_hole": false, 00:13:28.602 "seek_data": false, 00:13:28.602 "copy": true, 00:13:28.602 "nvme_iov_md": false 00:13:28.602 }, 00:13:28.602 "memory_domains": [ 00:13:28.602 { 00:13:28.602 "dma_device_id": "system", 00:13:28.602 "dma_device_type": 1 00:13:28.602 }, 00:13:28.602 { 00:13:28.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.602 "dma_device_type": 2 00:13:28.602 } 00:13:28.602 ], 00:13:28.602 "driver_specific": {} 00:13:28.602 }' 00:13:28.602 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.860 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:29.119 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:29.119 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:29.119 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:29.119 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:29.119 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:29.119 "name": "BaseBdev4", 00:13:29.119 "aliases": [ 00:13:29.119 "2ea858e9-ea20-487c-ad15-be12fdb4920e" 00:13:29.119 ], 00:13:29.119 "product_name": "Malloc disk", 00:13:29.119 "block_size": 512, 00:13:29.119 "num_blocks": 65536, 00:13:29.119 "uuid": "2ea858e9-ea20-487c-ad15-be12fdb4920e", 00:13:29.119 "assigned_rate_limits": { 00:13:29.119 
"rw_ios_per_sec": 0, 00:13:29.119 "rw_mbytes_per_sec": 0, 00:13:29.119 "r_mbytes_per_sec": 0, 00:13:29.119 "w_mbytes_per_sec": 0 00:13:29.119 }, 00:13:29.119 "claimed": true, 00:13:29.119 "claim_type": "exclusive_write", 00:13:29.119 "zoned": false, 00:13:29.119 "supported_io_types": { 00:13:29.119 "read": true, 00:13:29.119 "write": true, 00:13:29.119 "unmap": true, 00:13:29.119 "flush": true, 00:13:29.119 "reset": true, 00:13:29.119 "nvme_admin": false, 00:13:29.119 "nvme_io": false, 00:13:29.119 "nvme_io_md": false, 00:13:29.119 "write_zeroes": true, 00:13:29.119 "zcopy": true, 00:13:29.119 "get_zone_info": false, 00:13:29.119 "zone_management": false, 00:13:29.119 "zone_append": false, 00:13:29.119 "compare": false, 00:13:29.119 "compare_and_write": false, 00:13:29.119 "abort": true, 00:13:29.119 "seek_hole": false, 00:13:29.120 "seek_data": false, 00:13:29.120 "copy": true, 00:13:29.120 "nvme_iov_md": false 00:13:29.120 }, 00:13:29.120 "memory_domains": [ 00:13:29.120 { 00:13:29.120 "dma_device_id": "system", 00:13:29.120 "dma_device_type": 1 00:13:29.120 }, 00:13:29.120 { 00:13:29.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.120 "dma_device_type": 2 00:13:29.120 } 00:13:29.120 ], 00:13:29.120 "driver_specific": {} 00:13:29.120 }' 00:13:29.120 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:29.379 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:29.379 06:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:29.379 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:29.379 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:29.379 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:29.379 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:29.379 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:29.639 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:29.639 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:29.639 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:29.639 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:29.639 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:29.898 [2024-08-13 06:08:31.437756] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.898 [2024-08-13 06:08:31.437787] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.898 [2024-08-13 06:08:31.437902] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.898 [2024-08-13 06:08:31.437982] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.898 [2024-08-13 06:08:31.437998] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 84171 00:13:29.898 06:08:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 84171 ']' 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 84171 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84171 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:29.898 killing process with pid 84171 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84171' 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 84171 00:13:29.898 [2024-08-13 06:08:31.507419] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.898 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 84171 00:13:29.898 [2024-08-13 06:08:31.584595] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.157 06:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:30.158 00:13:30.158 real 0m28.118s 00:13:30.158 user 0m51.801s 00:13:30.158 sys 0m4.590s 00:13:30.158 ************************************ 00:13:30.158 END TEST raid_state_function_test_sb 00:13:30.158 ************************************ 00:13:30.158 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.158 06:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.417 06:08:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:30.417 06:08:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:30.417 06:08:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.418 06:08:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:30.418 ************************************ 00:13:30.418 START TEST raid_superblock_test 00:13:30.418 ************************************ 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local 
raid_bdev_name=raid_bdev1 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=85188 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 85188 /var/tmp/spdk-raid.sock 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 85188 ']' 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:30.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:30.418 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.418 [2024-08-13 06:08:32.114657] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
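A minimal sketch (not part of the captured output) of the rpc.py sequence the raid_superblock_test below walks through, assuming the bdev_svc test app is already listening on /var/tmp/spdk-raid.sock; the commands mirror the calls visible in this trace (create four malloc base bdevs, wrap each in a passthru bdev with a fixed UUID, assemble a raid0 volume with a superblock, then query its state):

# illustrative sketch, not captured log output
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "malloc$i"            # 32 MiB malloc bdev, 512-byte blocks (65536 blocks)
  $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
       -u "00000000-0000-0000-0000-00000000000$i"         # passthru bdev with a fixed per-bdev UUID
done
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s   # 64 KiB strip size, superblock enabled
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'   # inspect state, raid_level, base_bdevs_list

The test then asserts on fields of that JSON (state "online", num_base_bdevs_discovered, per-base-bdev is_configured), which is what the jq filters in the trace below are doing.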
00:13:30.418 [2024-08-13 06:08:32.114774] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85188 ] 00:13:30.677 [2024-08-13 06:08:32.260732] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.677 [2024-08-13 06:08:32.306123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.677 [2024-08-13 06:08:32.348263] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.677 [2024-08-13 06:08:32.348303] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.244 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:31.244 06:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:31.245 06:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:31.503 malloc1 00:13:31.504 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:31.767 [2024-08-13 06:08:33.312198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:31.767 [2024-08-13 06:08:33.312346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.767 [2024-08-13 06:08:33.312394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:31.767 [2024-08-13 06:08:33.312422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.767 [2024-08-13 06:08:33.314593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.767 [2024-08-13 06:08:33.314667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:31.767 pt1 00:13:31.767 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:31.768 malloc2 00:13:31.768 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:32.027 [2024-08-13 06:08:33.719955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:32.027 [2024-08-13 06:08:33.720099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.027 [2024-08-13 06:08:33.720153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.027 [2024-08-13 06:08:33.720181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.027 [2024-08-13 06:08:33.722168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.027 [2024-08-13 06:08:33.722234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:32.027 pt2 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.027 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:32.286 malloc3 00:13:32.286 06:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:32.545 [2024-08-13 06:08:34.159360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:32.545 [2024-08-13 06:08:34.159481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.545 [2024-08-13 06:08:34.159520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:32.545 [2024-08-13 06:08:34.159548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.545 [2024-08-13 06:08:34.161610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.545 [2024-08-13 
06:08:34.161684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:32.545 pt3 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.545 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.546 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:32.805 malloc4 00:13:32.805 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:32.805 [2024-08-13 06:08:34.559257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:32.805 [2024-08-13 06:08:34.559313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.805 [2024-08-13 06:08:34.559330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:32.805 [2024-08-13 06:08:34.559338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.806 [2024-08-13 06:08:34.561418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.806 [2024-08-13 06:08:34.561489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:32.806 pt4 00:13:32.806 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:32.806 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:32.806 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:33.065 [2024-08-13 06:08:34.766921] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.065 [2024-08-13 06:08:34.768669] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.065 [2024-08-13 06:08:34.768737] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:33.066 [2024-08-13 06:08:34.768785] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:33.066 [2024-08-13 06:08:34.768929] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:33.066 [2024-08-13 06:08:34.768943] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:33.066 [2024-08-13 06:08:34.769213] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:33.066 [2024-08-13 06:08:34.769344] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:33.066 [2024-08-13 06:08:34.769361] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:33.066 [2024-08-13 06:08:34.769472] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.066 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.326 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.326 "name": "raid_bdev1", 00:13:33.326 "uuid": "fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:33.326 "strip_size_kb": 64, 00:13:33.326 "state": "online", 00:13:33.326 "raid_level": "raid0", 00:13:33.326 "superblock": true, 00:13:33.326 "num_base_bdevs": 4, 00:13:33.326 "num_base_bdevs_discovered": 4, 00:13:33.326 "num_base_bdevs_operational": 4, 00:13:33.326 "base_bdevs_list": [ 00:13:33.326 { 00:13:33.326 "name": "pt1", 00:13:33.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.326 "is_configured": true, 00:13:33.326 "data_offset": 2048, 00:13:33.326 "data_size": 63488 00:13:33.326 }, 00:13:33.326 { 00:13:33.326 "name": "pt2", 00:13:33.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.326 "is_configured": true, 00:13:33.326 "data_offset": 2048, 00:13:33.326 "data_size": 63488 00:13:33.326 }, 00:13:33.326 { 00:13:33.326 "name": "pt3", 00:13:33.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.326 "is_configured": true, 00:13:33.326 "data_offset": 2048, 00:13:33.326 "data_size": 63488 00:13:33.326 }, 00:13:33.326 { 00:13:33.326 "name": "pt4", 00:13:33.326 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:33.326 "is_configured": true, 00:13:33.326 "data_offset": 2048, 00:13:33.326 "data_size": 63488 00:13:33.326 } 00:13:33.326 ] 00:13:33.326 }' 00:13:33.326 06:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.326 06:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:33.894 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:34.154 [2024-08-13 06:08:35.713632] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.154 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:34.154 "name": "raid_bdev1", 00:13:34.154 "aliases": [ 00:13:34.154 "fd0964a3-e4dd-4117-b12f-bae3895c6eff" 00:13:34.154 ], 00:13:34.154 "product_name": "Raid Volume", 00:13:34.154 "block_size": 512, 00:13:34.154 "num_blocks": 253952, 00:13:34.154 "uuid": "fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:34.154 "assigned_rate_limits": { 00:13:34.154 "rw_ios_per_sec": 0, 00:13:34.154 "rw_mbytes_per_sec": 0, 00:13:34.154 "r_mbytes_per_sec": 0, 00:13:34.154 "w_mbytes_per_sec": 0 00:13:34.154 }, 00:13:34.154 "claimed": false, 00:13:34.154 "zoned": false, 00:13:34.154 "supported_io_types": { 00:13:34.154 "read": true, 00:13:34.154 "write": true, 00:13:34.154 "unmap": true, 00:13:34.154 "flush": true, 00:13:34.154 "reset": true, 00:13:34.154 "nvme_admin": false, 00:13:34.154 "nvme_io": false, 00:13:34.154 "nvme_io_md": false, 00:13:34.154 "write_zeroes": true, 00:13:34.154 "zcopy": false, 00:13:34.154 "get_zone_info": false, 00:13:34.154 "zone_management": false, 00:13:34.154 "zone_append": false, 00:13:34.154 "compare": false, 00:13:34.154 "compare_and_write": false, 00:13:34.154 "abort": false, 00:13:34.154 "seek_hole": false, 00:13:34.154 "seek_data": false, 00:13:34.154 "copy": false, 00:13:34.154 "nvme_iov_md": false 00:13:34.154 }, 00:13:34.154 "memory_domains": [ 00:13:34.154 { 00:13:34.154 "dma_device_id": "system", 00:13:34.154 "dma_device_type": 1 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.154 "dma_device_type": 2 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "dma_device_id": "system", 00:13:34.154 "dma_device_type": 1 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.154 "dma_device_type": 2 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "dma_device_id": "system", 00:13:34.154 "dma_device_type": 1 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.154 "dma_device_type": 2 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "dma_device_id": "system", 00:13:34.154 "dma_device_type": 1 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.154 "dma_device_type": 2 00:13:34.154 } 00:13:34.154 ], 00:13:34.154 "driver_specific": { 00:13:34.154 "raid": { 00:13:34.154 "uuid": "fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:34.154 "strip_size_kb": 64, 00:13:34.154 "state": "online", 00:13:34.154 "raid_level": "raid0", 00:13:34.154 "superblock": true, 00:13:34.154 "num_base_bdevs": 4, 00:13:34.154 "num_base_bdevs_discovered": 4, 00:13:34.154 "num_base_bdevs_operational": 4, 00:13:34.154 
"base_bdevs_list": [ 00:13:34.154 { 00:13:34.154 "name": "pt1", 00:13:34.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.154 "is_configured": true, 00:13:34.154 "data_offset": 2048, 00:13:34.154 "data_size": 63488 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "name": "pt2", 00:13:34.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.154 "is_configured": true, 00:13:34.154 "data_offset": 2048, 00:13:34.154 "data_size": 63488 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "name": "pt3", 00:13:34.154 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.154 "is_configured": true, 00:13:34.154 "data_offset": 2048, 00:13:34.154 "data_size": 63488 00:13:34.154 }, 00:13:34.154 { 00:13:34.154 "name": "pt4", 00:13:34.154 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:34.154 "is_configured": true, 00:13:34.154 "data_offset": 2048, 00:13:34.154 "data_size": 63488 00:13:34.154 } 00:13:34.154 ] 00:13:34.154 } 00:13:34.154 } 00:13:34.154 }' 00:13:34.154 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.154 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:34.154 pt2 00:13:34.154 pt3 00:13:34.154 pt4' 00:13:34.154 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.154 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:34.154 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.414 06:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.414 "name": "pt1", 00:13:34.414 "aliases": [ 00:13:34.414 "00000000-0000-0000-0000-000000000001" 00:13:34.414 ], 00:13:34.414 "product_name": "passthru", 00:13:34.414 "block_size": 512, 00:13:34.414 "num_blocks": 65536, 00:13:34.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.414 "assigned_rate_limits": { 00:13:34.414 "rw_ios_per_sec": 0, 00:13:34.414 "rw_mbytes_per_sec": 0, 00:13:34.414 "r_mbytes_per_sec": 0, 00:13:34.414 "w_mbytes_per_sec": 0 00:13:34.414 }, 00:13:34.414 "claimed": true, 00:13:34.414 "claim_type": "exclusive_write", 00:13:34.414 "zoned": false, 00:13:34.414 "supported_io_types": { 00:13:34.414 "read": true, 00:13:34.414 "write": true, 00:13:34.414 "unmap": true, 00:13:34.414 "flush": true, 00:13:34.414 "reset": true, 00:13:34.414 "nvme_admin": false, 00:13:34.414 "nvme_io": false, 00:13:34.414 "nvme_io_md": false, 00:13:34.414 "write_zeroes": true, 00:13:34.414 "zcopy": true, 00:13:34.414 "get_zone_info": false, 00:13:34.414 "zone_management": false, 00:13:34.414 "zone_append": false, 00:13:34.414 "compare": false, 00:13:34.414 "compare_and_write": false, 00:13:34.414 "abort": true, 00:13:34.414 "seek_hole": false, 00:13:34.414 "seek_data": false, 00:13:34.414 "copy": true, 00:13:34.414 "nvme_iov_md": false 00:13:34.414 }, 00:13:34.414 "memory_domains": [ 00:13:34.414 { 00:13:34.414 "dma_device_id": "system", 00:13:34.414 "dma_device_type": 1 00:13:34.414 }, 00:13:34.414 { 00:13:34.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.414 "dma_device_type": 2 00:13:34.414 } 00:13:34.414 ], 00:13:34.414 "driver_specific": { 00:13:34.414 "passthru": { 00:13:34.414 "name": "pt1", 00:13:34.414 "base_bdev_name": "malloc1" 00:13:34.414 } 00:13:34.414 } 00:13:34.414 }' 00:13:34.414 06:08:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.414 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.414 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.414 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.414 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.414 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.414 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:34.674 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.934 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.934 "name": "pt2", 00:13:34.934 "aliases": [ 00:13:34.934 "00000000-0000-0000-0000-000000000002" 00:13:34.934 ], 00:13:34.934 "product_name": "passthru", 00:13:34.934 "block_size": 512, 00:13:34.934 "num_blocks": 65536, 00:13:34.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.934 "assigned_rate_limits": { 00:13:34.934 "rw_ios_per_sec": 0, 00:13:34.934 "rw_mbytes_per_sec": 0, 00:13:34.934 "r_mbytes_per_sec": 0, 00:13:34.934 "w_mbytes_per_sec": 0 00:13:34.934 }, 00:13:34.934 "claimed": true, 00:13:34.934 "claim_type": "exclusive_write", 00:13:34.934 "zoned": false, 00:13:34.934 "supported_io_types": { 00:13:34.934 "read": true, 00:13:34.934 "write": true, 00:13:34.934 "unmap": true, 00:13:34.934 "flush": true, 00:13:34.934 "reset": true, 00:13:34.934 "nvme_admin": false, 00:13:34.934 "nvme_io": false, 00:13:34.934 "nvme_io_md": false, 00:13:34.934 "write_zeroes": true, 00:13:34.934 "zcopy": true, 00:13:34.934 "get_zone_info": false, 00:13:34.934 "zone_management": false, 00:13:34.934 "zone_append": false, 00:13:34.934 "compare": false, 00:13:34.934 "compare_and_write": false, 00:13:34.934 "abort": true, 00:13:34.934 "seek_hole": false, 00:13:34.934 "seek_data": false, 00:13:34.934 "copy": true, 00:13:34.934 "nvme_iov_md": false 00:13:34.934 }, 00:13:34.934 "memory_domains": [ 00:13:34.934 { 00:13:34.934 "dma_device_id": "system", 00:13:34.934 "dma_device_type": 1 00:13:34.934 }, 00:13:34.934 { 00:13:34.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.934 "dma_device_type": 2 00:13:34.934 } 00:13:34.934 ], 00:13:34.934 "driver_specific": { 00:13:34.934 "passthru": { 00:13:34.934 "name": "pt2", 00:13:34.934 "base_bdev_name": "malloc2" 00:13:34.934 } 00:13:34.934 } 00:13:34.934 }' 00:13:34.934 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.934 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:13:34.934 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.934 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.934 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:35.194 06:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:35.454 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:35.454 "name": "pt3", 00:13:35.454 "aliases": [ 00:13:35.454 "00000000-0000-0000-0000-000000000003" 00:13:35.454 ], 00:13:35.454 "product_name": "passthru", 00:13:35.454 "block_size": 512, 00:13:35.454 "num_blocks": 65536, 00:13:35.454 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.454 "assigned_rate_limits": { 00:13:35.454 "rw_ios_per_sec": 0, 00:13:35.454 "rw_mbytes_per_sec": 0, 00:13:35.454 "r_mbytes_per_sec": 0, 00:13:35.454 "w_mbytes_per_sec": 0 00:13:35.454 }, 00:13:35.454 "claimed": true, 00:13:35.454 "claim_type": "exclusive_write", 00:13:35.454 "zoned": false, 00:13:35.454 "supported_io_types": { 00:13:35.454 "read": true, 00:13:35.454 "write": true, 00:13:35.454 "unmap": true, 00:13:35.454 "flush": true, 00:13:35.454 "reset": true, 00:13:35.454 "nvme_admin": false, 00:13:35.454 "nvme_io": false, 00:13:35.454 "nvme_io_md": false, 00:13:35.454 "write_zeroes": true, 00:13:35.454 "zcopy": true, 00:13:35.454 "get_zone_info": false, 00:13:35.454 "zone_management": false, 00:13:35.454 "zone_append": false, 00:13:35.454 "compare": false, 00:13:35.454 "compare_and_write": false, 00:13:35.454 "abort": true, 00:13:35.454 "seek_hole": false, 00:13:35.454 "seek_data": false, 00:13:35.454 "copy": true, 00:13:35.454 "nvme_iov_md": false 00:13:35.454 }, 00:13:35.454 "memory_domains": [ 00:13:35.454 { 00:13:35.454 "dma_device_id": "system", 00:13:35.454 "dma_device_type": 1 00:13:35.454 }, 00:13:35.454 { 00:13:35.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.454 "dma_device_type": 2 00:13:35.454 } 00:13:35.454 ], 00:13:35.454 "driver_specific": { 00:13:35.454 "passthru": { 00:13:35.454 "name": "pt3", 00:13:35.454 "base_bdev_name": "malloc3" 00:13:35.454 } 00:13:35.454 } 00:13:35.454 }' 00:13:35.454 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.454 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.454 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:35.454 06:08:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.454 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.714 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:35.715 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:35.974 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:35.974 "name": "pt4", 00:13:35.974 "aliases": [ 00:13:35.974 "00000000-0000-0000-0000-000000000004" 00:13:35.974 ], 00:13:35.974 "product_name": "passthru", 00:13:35.974 "block_size": 512, 00:13:35.975 "num_blocks": 65536, 00:13:35.975 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.975 "assigned_rate_limits": { 00:13:35.975 "rw_ios_per_sec": 0, 00:13:35.975 "rw_mbytes_per_sec": 0, 00:13:35.975 "r_mbytes_per_sec": 0, 00:13:35.975 "w_mbytes_per_sec": 0 00:13:35.975 }, 00:13:35.975 "claimed": true, 00:13:35.975 "claim_type": "exclusive_write", 00:13:35.975 "zoned": false, 00:13:35.975 "supported_io_types": { 00:13:35.975 "read": true, 00:13:35.975 "write": true, 00:13:35.975 "unmap": true, 00:13:35.975 "flush": true, 00:13:35.975 "reset": true, 00:13:35.975 "nvme_admin": false, 00:13:35.975 "nvme_io": false, 00:13:35.975 "nvme_io_md": false, 00:13:35.975 "write_zeroes": true, 00:13:35.975 "zcopy": true, 00:13:35.975 "get_zone_info": false, 00:13:35.975 "zone_management": false, 00:13:35.975 "zone_append": false, 00:13:35.975 "compare": false, 00:13:35.975 "compare_and_write": false, 00:13:35.975 "abort": true, 00:13:35.975 "seek_hole": false, 00:13:35.975 "seek_data": false, 00:13:35.975 "copy": true, 00:13:35.975 "nvme_iov_md": false 00:13:35.975 }, 00:13:35.975 "memory_domains": [ 00:13:35.975 { 00:13:35.975 "dma_device_id": "system", 00:13:35.975 "dma_device_type": 1 00:13:35.975 }, 00:13:35.975 { 00:13:35.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.975 "dma_device_type": 2 00:13:35.975 } 00:13:35.975 ], 00:13:35.975 "driver_specific": { 00:13:35.975 "passthru": { 00:13:35.975 "name": "pt4", 00:13:35.975 "base_bdev_name": "malloc4" 00:13:35.975 } 00:13:35.975 } 00:13:35.975 }' 00:13:35.975 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.975 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.975 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:35.975 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.234 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:13:36.234 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.234 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.234 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.234 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.234 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.234 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.235 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:36.235 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:36.235 06:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:13:36.494 [2024-08-13 06:08:38.157198] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.495 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=fd0964a3-e4dd-4117-b12f-bae3895c6eff 00:13:36.495 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z fd0964a3-e4dd-4117-b12f-bae3895c6eff ']' 00:13:36.495 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:36.755 [2024-08-13 06:08:38.352577] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.755 [2024-08-13 06:08:38.352618] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.755 [2024-08-13 06:08:38.352706] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.755 [2024-08-13 06:08:38.352771] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.755 [2024-08-13 06:08:38.352791] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:36.755 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.755 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:13:37.014 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:13:37.014 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:13:37.015 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:37.015 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:37.015 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:37.015 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:37.274 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:37.274 06:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:37.534 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:37.534 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:37.794 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:38.054 [2024-08-13 06:08:39.730218] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:38.054 [2024-08-13 06:08:39.731939] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:38.054 [2024-08-13 06:08:39.732024] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:38.054 [2024-08-13 06:08:39.732080] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:38.054 [2024-08-13 06:08:39.732148] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:38.054 [2024-08-13 06:08:39.732229] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:38.054 [2024-08-13 06:08:39.732314] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:38.054 [2024-08-13 06:08:39.732392] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:38.054 [2024-08-13 06:08:39.732434] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.054 [2024-08-13 06:08:39.732490] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:38.054 request: 00:13:38.054 { 00:13:38.054 "name": "raid_bdev1", 00:13:38.054 "raid_level": "raid0", 00:13:38.054 "base_bdevs": [ 00:13:38.054 "malloc1", 00:13:38.054 "malloc2", 00:13:38.054 "malloc3", 00:13:38.054 "malloc4" 00:13:38.054 ], 00:13:38.054 "strip_size_kb": 64, 00:13:38.054 "superblock": false, 00:13:38.054 "method": "bdev_raid_create", 00:13:38.054 "req_id": 1 00:13:38.054 } 00:13:38.054 Got JSON-RPC error response 00:13:38.054 response: 00:13:38.054 { 00:13:38.054 "code": -17, 00:13:38.054 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:38.054 } 00:13:38.054 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:13:38.054 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:38.054 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:38.054 06:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:38.054 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.054 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:13:38.313 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:13:38.313 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:13:38.313 06:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.573 [2024-08-13 06:08:40.117490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.573 [2024-08-13 06:08:40.117622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.573 [2024-08-13 06:08:40.117656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:38.573 [2024-08-13 06:08:40.117689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.573 [2024-08-13 06:08:40.119800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.573 [2024-08-13 06:08:40.119873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.573 [2024-08-13 06:08:40.119985] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:38.573 [2024-08-13 06:08:40.120060] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:38.573 pt1 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:38.573 06:08:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.573 "name": "raid_bdev1", 00:13:38.573 "uuid": "fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:38.573 "strip_size_kb": 64, 00:13:38.573 "state": "configuring", 00:13:38.573 "raid_level": "raid0", 00:13:38.573 "superblock": true, 00:13:38.573 "num_base_bdevs": 4, 00:13:38.573 "num_base_bdevs_discovered": 1, 00:13:38.573 "num_base_bdevs_operational": 4, 00:13:38.573 "base_bdevs_list": [ 00:13:38.573 { 00:13:38.573 "name": "pt1", 00:13:38.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.573 "is_configured": true, 00:13:38.573 "data_offset": 2048, 00:13:38.573 "data_size": 63488 00:13:38.573 }, 00:13:38.573 { 00:13:38.573 "name": null, 00:13:38.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.573 "is_configured": false, 00:13:38.573 "data_offset": 2048, 00:13:38.573 "data_size": 63488 00:13:38.573 }, 00:13:38.573 { 00:13:38.573 "name": null, 00:13:38.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.573 "is_configured": false, 00:13:38.573 "data_offset": 2048, 00:13:38.573 "data_size": 63488 00:13:38.573 }, 00:13:38.573 { 00:13:38.573 "name": null, 00:13:38.573 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.573 "is_configured": false, 00:13:38.573 "data_offset": 2048, 00:13:38.573 "data_size": 63488 00:13:38.573 } 00:13:38.573 ] 00:13:38.573 }' 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.573 06:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.142 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:13:39.142 06:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.401 [2024-08-13 06:08:41.048179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.401 [2024-08-13 06:08:41.048328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.401 [2024-08-13 06:08:41.048362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:39.401 [2024-08-13 06:08:41.048389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:13:39.401 [2024-08-13 06:08:41.048794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.402 [2024-08-13 06:08:41.048851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.402 [2024-08-13 06:08:41.048953] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:39.402 [2024-08-13 06:08:41.049003] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.402 pt2 00:13:39.402 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:39.661 [2024-08-13 06:08:41.239915] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.661 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.921 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.921 "name": "raid_bdev1", 00:13:39.921 "uuid": "fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:39.921 "strip_size_kb": 64, 00:13:39.921 "state": "configuring", 00:13:39.921 "raid_level": "raid0", 00:13:39.921 "superblock": true, 00:13:39.921 "num_base_bdevs": 4, 00:13:39.921 "num_base_bdevs_discovered": 1, 00:13:39.921 "num_base_bdevs_operational": 4, 00:13:39.921 "base_bdevs_list": [ 00:13:39.921 { 00:13:39.921 "name": "pt1", 00:13:39.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.921 "is_configured": true, 00:13:39.921 "data_offset": 2048, 00:13:39.921 "data_size": 63488 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "name": null, 00:13:39.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.921 "is_configured": false, 00:13:39.921 "data_offset": 2048, 00:13:39.921 "data_size": 63488 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "name": null, 00:13:39.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.921 "is_configured": false, 00:13:39.921 "data_offset": 2048, 00:13:39.921 "data_size": 63488 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "name": null, 00:13:39.921 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.921 "is_configured": false, 00:13:39.921 "data_offset": 2048, 
00:13:39.921 "data_size": 63488 00:13:39.921 } 00:13:39.921 ] 00:13:39.921 }' 00:13:39.921 06:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.921 06:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.491 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:13:40.491 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:40.491 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.491 [2024-08-13 06:08:42.230206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.491 [2024-08-13 06:08:42.230305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.491 [2024-08-13 06:08:42.230342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:40.491 [2024-08-13 06:08:42.230368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.491 [2024-08-13 06:08:42.230776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.491 [2024-08-13 06:08:42.230828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.491 [2024-08-13 06:08:42.230918] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.491 [2024-08-13 06:08:42.230962] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.491 pt2 00:13:40.491 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:13:40.491 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:40.491 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:40.751 [2024-08-13 06:08:42.421846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:40.751 [2024-08-13 06:08:42.421931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.751 [2024-08-13 06:08:42.421966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:40.751 [2024-08-13 06:08:42.422000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.751 [2024-08-13 06:08:42.422383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.751 [2024-08-13 06:08:42.422434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:40.751 [2024-08-13 06:08:42.422516] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:40.751 [2024-08-13 06:08:42.422559] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:40.751 pt3 00:13:40.751 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:13:40.751 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:40.751 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 
00:13:41.011 [2024-08-13 06:08:42.621527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:41.011 [2024-08-13 06:08:42.621606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.011 [2024-08-13 06:08:42.621627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:41.011 [2024-08-13 06:08:42.621635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.011 [2024-08-13 06:08:42.621931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.011 [2024-08-13 06:08:42.621946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:41.011 [2024-08-13 06:08:42.621994] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:41.011 [2024-08-13 06:08:42.622010] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:41.011 [2024-08-13 06:08:42.622125] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:41.011 [2024-08-13 06:08:42.622134] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:41.011 [2024-08-13 06:08:42.622353] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:41.011 [2024-08-13 06:08:42.622466] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:41.011 [2024-08-13 06:08:42.622484] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:13:41.011 [2024-08-13 06:08:42.622566] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.011 pt4 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.011 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.271 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.271 "name": "raid_bdev1", 00:13:41.271 "uuid": 
"fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:41.271 "strip_size_kb": 64, 00:13:41.271 "state": "online", 00:13:41.271 "raid_level": "raid0", 00:13:41.271 "superblock": true, 00:13:41.271 "num_base_bdevs": 4, 00:13:41.271 "num_base_bdevs_discovered": 4, 00:13:41.271 "num_base_bdevs_operational": 4, 00:13:41.271 "base_bdevs_list": [ 00:13:41.271 { 00:13:41.271 "name": "pt1", 00:13:41.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.271 "is_configured": true, 00:13:41.271 "data_offset": 2048, 00:13:41.271 "data_size": 63488 00:13:41.271 }, 00:13:41.271 { 00:13:41.271 "name": "pt2", 00:13:41.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.271 "is_configured": true, 00:13:41.271 "data_offset": 2048, 00:13:41.271 "data_size": 63488 00:13:41.271 }, 00:13:41.271 { 00:13:41.271 "name": "pt3", 00:13:41.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.271 "is_configured": true, 00:13:41.271 "data_offset": 2048, 00:13:41.271 "data_size": 63488 00:13:41.271 }, 00:13:41.271 { 00:13:41.271 "name": "pt4", 00:13:41.271 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.271 "is_configured": true, 00:13:41.271 "data_offset": 2048, 00:13:41.271 "data_size": 63488 00:13:41.271 } 00:13:41.271 ] 00:13:41.271 }' 00:13:41.271 06:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.271 06:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:41.840 [2024-08-13 06:08:43.588200] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.840 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:41.840 "name": "raid_bdev1", 00:13:41.840 "aliases": [ 00:13:41.840 "fd0964a3-e4dd-4117-b12f-bae3895c6eff" 00:13:41.840 ], 00:13:41.840 "product_name": "Raid Volume", 00:13:41.840 "block_size": 512, 00:13:41.840 "num_blocks": 253952, 00:13:41.840 "uuid": "fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:41.840 "assigned_rate_limits": { 00:13:41.840 "rw_ios_per_sec": 0, 00:13:41.840 "rw_mbytes_per_sec": 0, 00:13:41.840 "r_mbytes_per_sec": 0, 00:13:41.840 "w_mbytes_per_sec": 0 00:13:41.840 }, 00:13:41.840 "claimed": false, 00:13:41.840 "zoned": false, 00:13:41.840 "supported_io_types": { 00:13:41.841 "read": true, 00:13:41.841 "write": true, 00:13:41.841 "unmap": true, 00:13:41.841 "flush": true, 00:13:41.841 "reset": true, 00:13:41.841 "nvme_admin": false, 00:13:41.841 "nvme_io": false, 00:13:41.841 "nvme_io_md": false, 00:13:41.841 "write_zeroes": true, 00:13:41.841 "zcopy": false, 00:13:41.841 "get_zone_info": false, 00:13:41.841 "zone_management": 
false, 00:13:41.841 "zone_append": false, 00:13:41.841 "compare": false, 00:13:41.841 "compare_and_write": false, 00:13:41.841 "abort": false, 00:13:41.841 "seek_hole": false, 00:13:41.841 "seek_data": false, 00:13:41.841 "copy": false, 00:13:41.841 "nvme_iov_md": false 00:13:41.841 }, 00:13:41.841 "memory_domains": [ 00:13:41.841 { 00:13:41.841 "dma_device_id": "system", 00:13:41.841 "dma_device_type": 1 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.841 "dma_device_type": 2 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "dma_device_id": "system", 00:13:41.841 "dma_device_type": 1 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.841 "dma_device_type": 2 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "dma_device_id": "system", 00:13:41.841 "dma_device_type": 1 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.841 "dma_device_type": 2 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "dma_device_id": "system", 00:13:41.841 "dma_device_type": 1 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.841 "dma_device_type": 2 00:13:41.841 } 00:13:41.841 ], 00:13:41.841 "driver_specific": { 00:13:41.841 "raid": { 00:13:41.841 "uuid": "fd0964a3-e4dd-4117-b12f-bae3895c6eff", 00:13:41.841 "strip_size_kb": 64, 00:13:41.841 "state": "online", 00:13:41.841 "raid_level": "raid0", 00:13:41.841 "superblock": true, 00:13:41.841 "num_base_bdevs": 4, 00:13:41.841 "num_base_bdevs_discovered": 4, 00:13:41.841 "num_base_bdevs_operational": 4, 00:13:41.841 "base_bdevs_list": [ 00:13:41.841 { 00:13:41.841 "name": "pt1", 00:13:41.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.841 "is_configured": true, 00:13:41.841 "data_offset": 2048, 00:13:41.841 "data_size": 63488 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "name": "pt2", 00:13:41.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.841 "is_configured": true, 00:13:41.841 "data_offset": 2048, 00:13:41.841 "data_size": 63488 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "name": "pt3", 00:13:41.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.841 "is_configured": true, 00:13:41.841 "data_offset": 2048, 00:13:41.841 "data_size": 63488 00:13:41.841 }, 00:13:41.841 { 00:13:41.841 "name": "pt4", 00:13:41.841 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.841 "is_configured": true, 00:13:41.841 "data_offset": 2048, 00:13:41.841 "data_size": 63488 00:13:41.841 } 00:13:41.841 ] 00:13:41.841 } 00:13:41.841 } 00:13:41.841 }' 00:13:41.841 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.101 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:42.101 pt2 00:13:42.101 pt3 00:13:42.101 pt4' 00:13:42.101 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.101 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:42.101 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:42.101 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:42.101 "name": "pt1", 00:13:42.101 "aliases": [ 00:13:42.101 "00000000-0000-0000-0000-000000000001" 00:13:42.101 ], 00:13:42.101 "product_name": 
"passthru", 00:13:42.101 "block_size": 512, 00:13:42.101 "num_blocks": 65536, 00:13:42.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.101 "assigned_rate_limits": { 00:13:42.101 "rw_ios_per_sec": 0, 00:13:42.101 "rw_mbytes_per_sec": 0, 00:13:42.101 "r_mbytes_per_sec": 0, 00:13:42.101 "w_mbytes_per_sec": 0 00:13:42.101 }, 00:13:42.101 "claimed": true, 00:13:42.101 "claim_type": "exclusive_write", 00:13:42.101 "zoned": false, 00:13:42.101 "supported_io_types": { 00:13:42.101 "read": true, 00:13:42.101 "write": true, 00:13:42.101 "unmap": true, 00:13:42.101 "flush": true, 00:13:42.101 "reset": true, 00:13:42.101 "nvme_admin": false, 00:13:42.101 "nvme_io": false, 00:13:42.101 "nvme_io_md": false, 00:13:42.101 "write_zeroes": true, 00:13:42.101 "zcopy": true, 00:13:42.101 "get_zone_info": false, 00:13:42.101 "zone_management": false, 00:13:42.101 "zone_append": false, 00:13:42.101 "compare": false, 00:13:42.101 "compare_and_write": false, 00:13:42.101 "abort": true, 00:13:42.101 "seek_hole": false, 00:13:42.101 "seek_data": false, 00:13:42.101 "copy": true, 00:13:42.101 "nvme_iov_md": false 00:13:42.101 }, 00:13:42.101 "memory_domains": [ 00:13:42.101 { 00:13:42.101 "dma_device_id": "system", 00:13:42.101 "dma_device_type": 1 00:13:42.101 }, 00:13:42.101 { 00:13:42.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.101 "dma_device_type": 2 00:13:42.101 } 00:13:42.101 ], 00:13:42.101 "driver_specific": { 00:13:42.101 "passthru": { 00:13:42.101 "name": "pt1", 00:13:42.101 "base_bdev_name": "malloc1" 00:13:42.101 } 00:13:42.101 } 00:13:42.101 }' 00:13:42.101 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.360 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.360 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:42.360 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.360 06:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.360 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.360 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.360 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.360 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.360 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.620 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.620 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:42.620 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.620 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:42.620 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:42.620 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:42.620 "name": "pt2", 00:13:42.620 "aliases": [ 00:13:42.620 "00000000-0000-0000-0000-000000000002" 00:13:42.620 ], 00:13:42.620 "product_name": "passthru", 00:13:42.620 "block_size": 512, 00:13:42.620 "num_blocks": 65536, 00:13:42.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.620 
"assigned_rate_limits": { 00:13:42.620 "rw_ios_per_sec": 0, 00:13:42.620 "rw_mbytes_per_sec": 0, 00:13:42.620 "r_mbytes_per_sec": 0, 00:13:42.620 "w_mbytes_per_sec": 0 00:13:42.620 }, 00:13:42.620 "claimed": true, 00:13:42.620 "claim_type": "exclusive_write", 00:13:42.620 "zoned": false, 00:13:42.620 "supported_io_types": { 00:13:42.620 "read": true, 00:13:42.620 "write": true, 00:13:42.620 "unmap": true, 00:13:42.620 "flush": true, 00:13:42.620 "reset": true, 00:13:42.620 "nvme_admin": false, 00:13:42.620 "nvme_io": false, 00:13:42.620 "nvme_io_md": false, 00:13:42.620 "write_zeroes": true, 00:13:42.620 "zcopy": true, 00:13:42.620 "get_zone_info": false, 00:13:42.620 "zone_management": false, 00:13:42.620 "zone_append": false, 00:13:42.620 "compare": false, 00:13:42.620 "compare_and_write": false, 00:13:42.620 "abort": true, 00:13:42.620 "seek_hole": false, 00:13:42.620 "seek_data": false, 00:13:42.620 "copy": true, 00:13:42.620 "nvme_iov_md": false 00:13:42.620 }, 00:13:42.620 "memory_domains": [ 00:13:42.620 { 00:13:42.620 "dma_device_id": "system", 00:13:42.620 "dma_device_type": 1 00:13:42.620 }, 00:13:42.620 { 00:13:42.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.620 "dma_device_type": 2 00:13:42.620 } 00:13:42.620 ], 00:13:42.620 "driver_specific": { 00:13:42.620 "passthru": { 00:13:42.620 "name": "pt2", 00:13:42.620 "base_bdev_name": "malloc2" 00:13:42.620 } 00:13:42.620 } 00:13:42.620 }' 00:13:42.620 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.880 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.140 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.140 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.140 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.140 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:43.140 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.399 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.399 "name": "pt3", 00:13:43.399 "aliases": [ 00:13:43.399 "00000000-0000-0000-0000-000000000003" 00:13:43.399 ], 00:13:43.399 "product_name": "passthru", 00:13:43.399 "block_size": 512, 00:13:43.399 "num_blocks": 65536, 00:13:43.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.399 "assigned_rate_limits": { 00:13:43.399 "rw_ios_per_sec": 0, 00:13:43.399 "rw_mbytes_per_sec": 0, 00:13:43.399 "r_mbytes_per_sec": 0, 00:13:43.399 
"w_mbytes_per_sec": 0 00:13:43.399 }, 00:13:43.399 "claimed": true, 00:13:43.399 "claim_type": "exclusive_write", 00:13:43.399 "zoned": false, 00:13:43.399 "supported_io_types": { 00:13:43.399 "read": true, 00:13:43.399 "write": true, 00:13:43.399 "unmap": true, 00:13:43.399 "flush": true, 00:13:43.399 "reset": true, 00:13:43.399 "nvme_admin": false, 00:13:43.399 "nvme_io": false, 00:13:43.399 "nvme_io_md": false, 00:13:43.399 "write_zeroes": true, 00:13:43.399 "zcopy": true, 00:13:43.399 "get_zone_info": false, 00:13:43.399 "zone_management": false, 00:13:43.399 "zone_append": false, 00:13:43.399 "compare": false, 00:13:43.399 "compare_and_write": false, 00:13:43.399 "abort": true, 00:13:43.399 "seek_hole": false, 00:13:43.399 "seek_data": false, 00:13:43.399 "copy": true, 00:13:43.400 "nvme_iov_md": false 00:13:43.400 }, 00:13:43.400 "memory_domains": [ 00:13:43.400 { 00:13:43.400 "dma_device_id": "system", 00:13:43.400 "dma_device_type": 1 00:13:43.400 }, 00:13:43.400 { 00:13:43.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.400 "dma_device_type": 2 00:13:43.400 } 00:13:43.400 ], 00:13:43.400 "driver_specific": { 00:13:43.400 "passthru": { 00:13:43.400 "name": "pt3", 00:13:43.400 "base_bdev_name": "malloc3" 00:13:43.400 } 00:13:43.400 } 00:13:43.400 }' 00:13:43.400 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.400 06:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.400 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.400 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.400 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.400 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.400 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.400 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.659 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.659 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.659 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.659 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.659 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.659 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.659 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.919 "name": "pt4", 00:13:43.919 "aliases": [ 00:13:43.919 "00000000-0000-0000-0000-000000000004" 00:13:43.919 ], 00:13:43.919 "product_name": "passthru", 00:13:43.919 "block_size": 512, 00:13:43.919 "num_blocks": 65536, 00:13:43.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.919 "assigned_rate_limits": { 00:13:43.919 "rw_ios_per_sec": 0, 00:13:43.919 "rw_mbytes_per_sec": 0, 00:13:43.919 "r_mbytes_per_sec": 0, 00:13:43.919 "w_mbytes_per_sec": 0 00:13:43.919 }, 00:13:43.919 "claimed": true, 00:13:43.919 "claim_type": "exclusive_write", 00:13:43.919 "zoned": false, 
00:13:43.919 "supported_io_types": { 00:13:43.919 "read": true, 00:13:43.919 "write": true, 00:13:43.919 "unmap": true, 00:13:43.919 "flush": true, 00:13:43.919 "reset": true, 00:13:43.919 "nvme_admin": false, 00:13:43.919 "nvme_io": false, 00:13:43.919 "nvme_io_md": false, 00:13:43.919 "write_zeroes": true, 00:13:43.919 "zcopy": true, 00:13:43.919 "get_zone_info": false, 00:13:43.919 "zone_management": false, 00:13:43.919 "zone_append": false, 00:13:43.919 "compare": false, 00:13:43.919 "compare_and_write": false, 00:13:43.919 "abort": true, 00:13:43.919 "seek_hole": false, 00:13:43.919 "seek_data": false, 00:13:43.919 "copy": true, 00:13:43.919 "nvme_iov_md": false 00:13:43.919 }, 00:13:43.919 "memory_domains": [ 00:13:43.919 { 00:13:43.919 "dma_device_id": "system", 00:13:43.919 "dma_device_type": 1 00:13:43.919 }, 00:13:43.919 { 00:13:43.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.919 "dma_device_type": 2 00:13:43.919 } 00:13:43.919 ], 00:13:43.919 "driver_specific": { 00:13:43.919 "passthru": { 00:13:43.919 "name": "pt4", 00:13:43.919 "base_bdev_name": "malloc4" 00:13:43.919 } 00:13:43.919 } 00:13:43.919 }' 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.919 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.179 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.179 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.179 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.179 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.179 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.179 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:44.179 06:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:13:44.439 [2024-08-13 06:08:46.016137] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' fd0964a3-e4dd-4117-b12f-bae3895c6eff '!=' fd0964a3-e4dd-4117-b12f-bae3895c6eff ']' 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 85188 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 85188 ']' 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 85188 00:13:44.439 06:08:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85188 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85188' 00:13:44.439 killing process with pid 85188 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 85188 00:13:44.439 [2024-08-13 06:08:46.067407] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.439 [2024-08-13 06:08:46.067547] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.439 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 85188 00:13:44.439 [2024-08-13 06:08:46.067646] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.439 [2024-08-13 06:08:46.067660] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:13:44.439 [2024-08-13 06:08:46.111134] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.699 06:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:13:44.699 00:13:44.700 real 0m14.322s 00:13:44.700 user 0m25.940s 00:13:44.700 sys 0m2.311s 00:13:44.700 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.700 06:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.700 ************************************ 00:13:44.700 END TEST raid_superblock_test 00:13:44.700 ************************************ 00:13:44.700 06:08:46 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:44.700 06:08:46 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:44.700 06:08:46 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.700 06:08:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.700 ************************************ 00:13:44.700 START TEST raid_read_error_test 00:13:44.700 ************************************ 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 read 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.X7KErjzW2c 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=85687 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 85687 /var/tmp/spdk-raid.sock 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 85687 ']' 00:13:44.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:44.700 06:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.960 [2024-08-13 06:08:46.531778] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:13:44.960 [2024-08-13 06:08:46.532404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85687 ] 00:13:44.960 [2024-08-13 06:08:46.675578] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.960 [2024-08-13 06:08:46.720600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.220 [2024-08-13 06:08:46.763429] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.220 [2024-08-13 06:08:46.763551] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.789 06:08:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:45.789 06:08:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:13:45.789 06:08:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:45.789 06:08:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.789 BaseBdev1_malloc 00:13:45.789 06:08:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:46.049 true 00:13:46.049 06:08:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:46.309 [2024-08-13 06:08:47.931027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:46.309 [2024-08-13 06:08:47.931203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.309 [2024-08-13 06:08:47.931250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:13:46.309 [2024-08-13 06:08:47.931283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.309 [2024-08-13 06:08:47.933443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.309 [2024-08-13 06:08:47.933526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.309 BaseBdev1 00:13:46.309 06:08:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:46.309 06:08:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.569 BaseBdev2_malloc 00:13:46.569 06:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:46.569 true 00:13:46.569 06:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:13:46.829 [2024-08-13 06:08:48.530662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:46.829 [2024-08-13 06:08:48.530802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.829 [2024-08-13 06:08:48.530850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:13:46.829 [2024-08-13 06:08:48.530882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.829 [2024-08-13 06:08:48.532931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.829 [2024-08-13 06:08:48.533002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.829 BaseBdev2 00:13:46.829 06:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:46.829 06:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.089 BaseBdev3_malloc 00:13:47.089 06:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:47.348 true 00:13:47.348 06:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:47.348 [2024-08-13 06:08:49.109980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:47.348 [2024-08-13 06:08:49.110106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.348 [2024-08-13 06:08:49.110143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:13:47.348 [2024-08-13 06:08:49.110154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.348 [2024-08-13 06:08:49.112088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.348 [2024-08-13 06:08:49.112124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.348 BaseBdev3 00:13:47.608 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:47.608 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:47.608 BaseBdev4_malloc 00:13:47.608 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:47.868 true 00:13:47.868 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:47.868 [2024-08-13 06:08:49.649307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:47.868 [2024-08-13 06:08:49.649402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.868 [2024-08-13 06:08:49.649438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:47.868 [2024-08-13 06:08:49.649450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:13:47.868 [2024-08-13 06:08:49.651367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.868 [2024-08-13 06:08:49.651407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:47.868 BaseBdev4 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:48.128 [2024-08-13 06:08:49.841196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.128 [2024-08-13 06:08:49.842843] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.128 [2024-08-13 06:08:49.842916] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.128 [2024-08-13 06:08:49.842973] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.128 [2024-08-13 06:08:49.843159] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:13:48.128 [2024-08-13 06:08:49.843173] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:48.128 [2024-08-13 06:08:49.843393] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:48.128 [2024-08-13 06:08:49.843511] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:13:48.128 [2024-08-13 06:08:49.843524] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:13:48.128 [2024-08-13 06:08:49.843641] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.128 06:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.387 06:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.387 "name": "raid_bdev1", 00:13:48.387 "uuid": "f0521010-41af-4f2f-b61c-80b904f3087e", 00:13:48.387 "strip_size_kb": 64, 00:13:48.387 "state": "online", 00:13:48.387 "raid_level": "raid0", 00:13:48.387 "superblock": true, 
00:13:48.387 "num_base_bdevs": 4, 00:13:48.387 "num_base_bdevs_discovered": 4, 00:13:48.387 "num_base_bdevs_operational": 4, 00:13:48.387 "base_bdevs_list": [ 00:13:48.387 { 00:13:48.387 "name": "BaseBdev1", 00:13:48.387 "uuid": "9e135d6c-d48b-52da-ba4b-d51ef9b0536a", 00:13:48.387 "is_configured": true, 00:13:48.387 "data_offset": 2048, 00:13:48.387 "data_size": 63488 00:13:48.387 }, 00:13:48.387 { 00:13:48.387 "name": "BaseBdev2", 00:13:48.387 "uuid": "d17c82ea-9d1c-5778-bdfd-a75b1601c4c8", 00:13:48.387 "is_configured": true, 00:13:48.387 "data_offset": 2048, 00:13:48.387 "data_size": 63488 00:13:48.387 }, 00:13:48.387 { 00:13:48.387 "name": "BaseBdev3", 00:13:48.387 "uuid": "87e1d0fb-cf33-5b87-a857-245049426b67", 00:13:48.387 "is_configured": true, 00:13:48.387 "data_offset": 2048, 00:13:48.387 "data_size": 63488 00:13:48.387 }, 00:13:48.387 { 00:13:48.387 "name": "BaseBdev4", 00:13:48.387 "uuid": "260a9efa-934e-5ccd-8f26-4f1bd866c398", 00:13:48.387 "is_configured": true, 00:13:48.387 "data_offset": 2048, 00:13:48.387 "data_size": 63488 00:13:48.387 } 00:13:48.387 ] 00:13:48.387 }' 00:13:48.387 06:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.387 06:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.956 06:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:48.956 06:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:48.956 [2024-08-13 06:08:50.696166] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:49.903 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.175 06:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.175 06:08:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.437 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.437 "name": "raid_bdev1", 00:13:50.437 "uuid": "f0521010-41af-4f2f-b61c-80b904f3087e", 00:13:50.437 "strip_size_kb": 64, 00:13:50.437 "state": "online", 00:13:50.437 "raid_level": "raid0", 00:13:50.437 "superblock": true, 00:13:50.437 "num_base_bdevs": 4, 00:13:50.437 "num_base_bdevs_discovered": 4, 00:13:50.437 "num_base_bdevs_operational": 4, 00:13:50.437 "base_bdevs_list": [ 00:13:50.437 { 00:13:50.437 "name": "BaseBdev1", 00:13:50.437 "uuid": "9e135d6c-d48b-52da-ba4b-d51ef9b0536a", 00:13:50.437 "is_configured": true, 00:13:50.437 "data_offset": 2048, 00:13:50.438 "data_size": 63488 00:13:50.438 }, 00:13:50.438 { 00:13:50.438 "name": "BaseBdev2", 00:13:50.438 "uuid": "d17c82ea-9d1c-5778-bdfd-a75b1601c4c8", 00:13:50.438 "is_configured": true, 00:13:50.438 "data_offset": 2048, 00:13:50.438 "data_size": 63488 00:13:50.438 }, 00:13:50.438 { 00:13:50.438 "name": "BaseBdev3", 00:13:50.438 "uuid": "87e1d0fb-cf33-5b87-a857-245049426b67", 00:13:50.438 "is_configured": true, 00:13:50.438 "data_offset": 2048, 00:13:50.438 "data_size": 63488 00:13:50.438 }, 00:13:50.438 { 00:13:50.438 "name": "BaseBdev4", 00:13:50.438 "uuid": "260a9efa-934e-5ccd-8f26-4f1bd866c398", 00:13:50.438 "is_configured": true, 00:13:50.438 "data_offset": 2048, 00:13:50.438 "data_size": 63488 00:13:50.438 } 00:13:50.438 ] 00:13:50.438 }' 00:13:50.438 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.438 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.007 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:51.007 [2024-08-13 06:08:52.673872] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.007 [2024-08-13 06:08:52.674016] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.007 [2024-08-13 06:08:52.676154] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.007 [2024-08-13 06:08:52.676247] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.007 [2024-08-13 06:08:52.676304] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.007 [2024-08-13 06:08:52.676343] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:13:51.007 0 00:13:51.007 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 85687 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 85687 ']' 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 85687 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85687 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85687' 00:13:51.008 killing process with pid 85687 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 85687 00:13:51.008 [2024-08-13 06:08:52.720794] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.008 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 85687 00:13:51.008 [2024-08-13 06:08:52.755443] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.X7KErjzW2c 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.51 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.51 != \0\.\0\0 ]] 00:13:51.267 00:13:51.267 real 0m6.574s 00:13:51.267 user 0m10.303s 00:13:51.267 sys 0m1.007s 00:13:51.267 ************************************ 00:13:51.267 END TEST raid_read_error_test 00:13:51.267 ************************************ 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.267 06:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.527 06:08:53 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:51.527 06:08:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:51.527 06:08:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.527 06:08:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.527 ************************************ 00:13:51.527 START TEST raid_write_error_test 00:13:51.527 ************************************ 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 write 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:51.527 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.dmkHe1xPVv 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=85864 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 85864 /var/tmp/spdk-raid.sock 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 85864 ']' 00:13:51.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.528 06:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.528 [2024-08-13 06:08:53.190135] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
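From here the write-error run repeats the shape of the read-error run above: once the array is online, the script releases bdevperf's deferred workload over RPC, injects failures into the error bdev under BaseBdev1, re-checks the array, tears it down and extracts the failure rate from the bdevperf log (tmp.dmkHe1xPVv for this run). A rough sketch of that back half, reusing only commands that appear in the trace; variable names are illustrative, and in the real script bdevperf's output is captured into the mktemp file shown:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    bdevperf_log=/raidtest/tmp.dmkHe1xPVv     # mktemp result shown in the trace

    # Release the workload that -z was holding back, then give it a moment to start.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
    sleep 1

    # Inject write failures on the error bdev beneath BaseBdev1. raid0 has no redundancy,
    # so these are expected to surface as failed I/Os on raid_bdev1.
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure

    # The array itself should still report all four base bdevs online.
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

    # Tear down, then pull the per-second failure count from the bdevperf summary line;
    # the test only requires it to differ from 0.00 (0.51 in the read run above, 0.49 here).
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]]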
00:13:51.528 [2024-08-13 06:08:53.190270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85864 ] 00:13:51.787 [2024-08-13 06:08:53.337711] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.787 [2024-08-13 06:08:53.383873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.787 [2024-08-13 06:08:53.425991] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.787 [2024-08-13 06:08:53.426045] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.363 06:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.363 06:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:13:52.363 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:52.363 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.622 BaseBdev1_malloc 00:13:52.622 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:52.622 true 00:13:52.622 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:52.882 [2024-08-13 06:08:54.569284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:52.882 [2024-08-13 06:08:54.569435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.882 [2024-08-13 06:08:54.569468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:13:52.882 [2024-08-13 06:08:54.569484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.882 [2024-08-13 06:08:54.571582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.882 [2024-08-13 06:08:54.571626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.882 BaseBdev1 00:13:52.882 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:52.882 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.141 BaseBdev2_malloc 00:13:53.141 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:53.401 true 00:13:53.401 06:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:53.401 [2024-08-13 06:08:55.161268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:53.401 [2024-08-13 06:08:55.161433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.401 [2024-08-13 06:08:55.161459] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:13:53.401 [2024-08-13 06:08:55.161469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.401 [2024-08-13 06:08:55.163491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.401 [2024-08-13 06:08:55.163529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.401 BaseBdev2 00:13:53.401 06:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:53.401 06:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.660 BaseBdev3_malloc 00:13:53.660 06:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:53.919 true 00:13:53.919 06:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:54.179 [2024-08-13 06:08:55.750908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:54.179 [2024-08-13 06:08:55.750965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.179 [2024-08-13 06:08:55.750981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:13:54.179 [2024-08-13 06:08:55.750990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.179 [2024-08-13 06:08:55.752872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.179 [2024-08-13 06:08:55.752915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:54.179 BaseBdev3 00:13:54.179 06:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:54.179 06:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:54.179 BaseBdev4_malloc 00:13:54.179 06:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:54.438 true 00:13:54.438 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:54.697 [2024-08-13 06:08:56.298279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:54.697 [2024-08-13 06:08:56.298334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.697 [2024-08-13 06:08:56.298367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:54.697 [2024-08-13 06:08:56.298380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.697 [2024-08-13 06:08:56.300362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.697 [2024-08-13 06:08:56.300410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:54.697 BaseBdev4 00:13:54.697 
06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:54.956 [2024-08-13 06:08:56.497994] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.956 [2024-08-13 06:08:56.499690] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.956 [2024-08-13 06:08:56.499760] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.956 [2024-08-13 06:08:56.499817] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.956 [2024-08-13 06:08:56.500000] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:13:54.956 [2024-08-13 06:08:56.500013] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:54.956 [2024-08-13 06:08:56.500266] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:54.956 [2024-08-13 06:08:56.500402] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:13:54.956 [2024-08-13 06:08:56.500421] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:13:54.956 [2024-08-13 06:08:56.500524] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.956 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.957 "name": "raid_bdev1", 00:13:54.957 "uuid": "061661a9-bdd6-480b-98df-5d8ef13e983b", 00:13:54.957 "strip_size_kb": 64, 00:13:54.957 "state": "online", 00:13:54.957 "raid_level": "raid0", 00:13:54.957 "superblock": true, 00:13:54.957 "num_base_bdevs": 4, 00:13:54.957 "num_base_bdevs_discovered": 4, 00:13:54.957 "num_base_bdevs_operational": 4, 00:13:54.957 "base_bdevs_list": [ 00:13:54.957 { 00:13:54.957 "name": "BaseBdev1", 00:13:54.957 "uuid": "b54ff379-291a-5540-8346-c7c95796e54b", 00:13:54.957 
"is_configured": true, 00:13:54.957 "data_offset": 2048, 00:13:54.957 "data_size": 63488 00:13:54.957 }, 00:13:54.957 { 00:13:54.957 "name": "BaseBdev2", 00:13:54.957 "uuid": "5fdd8f71-8033-5ae5-8cf0-2b52bdc74d87", 00:13:54.957 "is_configured": true, 00:13:54.957 "data_offset": 2048, 00:13:54.957 "data_size": 63488 00:13:54.957 }, 00:13:54.957 { 00:13:54.957 "name": "BaseBdev3", 00:13:54.957 "uuid": "03f90cb4-c484-5d10-b989-93df4d2225be", 00:13:54.957 "is_configured": true, 00:13:54.957 "data_offset": 2048, 00:13:54.957 "data_size": 63488 00:13:54.957 }, 00:13:54.957 { 00:13:54.957 "name": "BaseBdev4", 00:13:54.957 "uuid": "67773794-a722-596c-b4df-dbcca1c56054", 00:13:54.957 "is_configured": true, 00:13:54.957 "data_offset": 2048, 00:13:54.957 "data_size": 63488 00:13:54.957 } 00:13:54.957 ] 00:13:54.957 }' 00:13:54.957 06:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.957 06:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.525 06:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:55.525 06:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:55.785 [2024-08-13 06:08:57.372822] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.725 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.985 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.985 "name": "raid_bdev1", 00:13:56.985 "uuid": 
"061661a9-bdd6-480b-98df-5d8ef13e983b", 00:13:56.985 "strip_size_kb": 64, 00:13:56.985 "state": "online", 00:13:56.985 "raid_level": "raid0", 00:13:56.985 "superblock": true, 00:13:56.985 "num_base_bdevs": 4, 00:13:56.985 "num_base_bdevs_discovered": 4, 00:13:56.985 "num_base_bdevs_operational": 4, 00:13:56.985 "base_bdevs_list": [ 00:13:56.985 { 00:13:56.985 "name": "BaseBdev1", 00:13:56.985 "uuid": "b54ff379-291a-5540-8346-c7c95796e54b", 00:13:56.985 "is_configured": true, 00:13:56.985 "data_offset": 2048, 00:13:56.985 "data_size": 63488 00:13:56.985 }, 00:13:56.985 { 00:13:56.985 "name": "BaseBdev2", 00:13:56.985 "uuid": "5fdd8f71-8033-5ae5-8cf0-2b52bdc74d87", 00:13:56.985 "is_configured": true, 00:13:56.985 "data_offset": 2048, 00:13:56.985 "data_size": 63488 00:13:56.985 }, 00:13:56.985 { 00:13:56.985 "name": "BaseBdev3", 00:13:56.985 "uuid": "03f90cb4-c484-5d10-b989-93df4d2225be", 00:13:56.985 "is_configured": true, 00:13:56.985 "data_offset": 2048, 00:13:56.985 "data_size": 63488 00:13:56.985 }, 00:13:56.985 { 00:13:56.985 "name": "BaseBdev4", 00:13:56.985 "uuid": "67773794-a722-596c-b4df-dbcca1c56054", 00:13:56.985 "is_configured": true, 00:13:56.985 "data_offset": 2048, 00:13:56.985 "data_size": 63488 00:13:56.985 } 00:13:56.985 ] 00:13:56.985 }' 00:13:56.985 06:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.985 06:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.555 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:57.815 [2024-08-13 06:08:59.422678] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.815 [2024-08-13 06:08:59.422817] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.815 [2024-08-13 06:08:59.424941] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.815 [2024-08-13 06:08:59.425056] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.815 [2024-08-13 06:08:59.425116] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.815 [2024-08-13 06:08:59.425169] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:13:57.815 0 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 85864 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 85864 ']' 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 85864 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85864 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85864' 00:13:57.815 killing process with pid 85864 00:13:57.815 06:08:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 85864 00:13:57.815 [2024-08-13 06:08:59.474616] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.815 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 85864 00:13:57.815 [2024-08-13 06:08:59.509789] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.dmkHe1xPVv 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:13:58.076 00:13:58.076 real 0m6.674s 00:13:58.076 user 0m10.467s 00:13:58.076 sys 0m1.042s 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.076 06:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.076 ************************************ 00:13:58.076 END TEST raid_write_error_test 00:13:58.076 ************************************ 00:13:58.076 06:08:59 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:58.076 06:08:59 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:58.076 06:08:59 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:58.076 06:08:59 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.076 06:08:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.076 ************************************ 00:13:58.076 START TEST raid_state_function_test 00:13:58.076 ************************************ 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:58.076 Process raid pid: 86045 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=86045 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 86045' 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 86045 /var/tmp/spdk-raid.sock 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 86045 ']' 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:58.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
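raid_state_function_test, which starts here, drives the lightweight bdev_svc app instead of bdevperf: there is no I/O workload, only RPC-level checks of the RAID state machine. The point visible in the trace below is that the array is created before any of its base bdevs exist, so it sits in the configuring state with zero base bdevs discovered; registering a real BaseBdev1 then moves the discovered count to 1. A condensed sketch of those first steps (concat level, four base bdevs, no superblock; commands and names as in the trace, the socket wait is a simplification):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Plain bdev service app: just an RPC target, no benchmark workload.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    while [ ! -S "$sock" ]; do sleep 0.1; done

    # Creating the array while BaseBdev1..4 are missing leaves it "configuring", 0 of 4 discovered.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # The test deletes and recreates the half-configured array between checks.
    "$rpc" -s "$sock" bdev_raid_delete Existed_Raid
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Registering a real BaseBdev1 lets the array claim it: 1 of 4 discovered, still "configuring".
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    "$rpc" -s "$sock" bdev_wait_for_examine
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'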
00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:58.076 06:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.336 [2024-08-13 06:08:59.928956] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:13:58.336 [2024-08-13 06:08:59.929199] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.336 [2024-08-13 06:09:00.075603] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.336 [2024-08-13 06:09:00.122348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.596 [2024-08-13 06:09:00.165066] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.596 [2024-08-13 06:09:00.165180] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:59.165 [2024-08-13 06:09:00.900720] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.165 [2024-08-13 06:09:00.900833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.165 [2024-08-13 06:09:00.900863] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.165 [2024-08-13 06:09:00.900882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.165 [2024-08-13 06:09:00.900903] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:59.165 [2024-08-13 06:09:00.900921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:59.165 [2024-08-13 06:09:00.900940] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:59.165 [2024-08-13 06:09:00.900957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.165 06:09:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.165 06:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.426 06:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.426 "name": "Existed_Raid", 00:13:59.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.426 "strip_size_kb": 64, 00:13:59.426 "state": "configuring", 00:13:59.426 "raid_level": "concat", 00:13:59.426 "superblock": false, 00:13:59.426 "num_base_bdevs": 4, 00:13:59.426 "num_base_bdevs_discovered": 0, 00:13:59.426 "num_base_bdevs_operational": 4, 00:13:59.426 "base_bdevs_list": [ 00:13:59.426 { 00:13:59.426 "name": "BaseBdev1", 00:13:59.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.426 "is_configured": false, 00:13:59.426 "data_offset": 0, 00:13:59.426 "data_size": 0 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "name": "BaseBdev2", 00:13:59.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.426 "is_configured": false, 00:13:59.426 "data_offset": 0, 00:13:59.426 "data_size": 0 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "name": "BaseBdev3", 00:13:59.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.426 "is_configured": false, 00:13:59.426 "data_offset": 0, 00:13:59.426 "data_size": 0 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "name": "BaseBdev4", 00:13:59.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.426 "is_configured": false, 00:13:59.426 "data_offset": 0, 00:13:59.426 "data_size": 0 00:13:59.426 } 00:13:59.426 ] 00:13:59.426 }' 00:13:59.426 06:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.426 06:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.995 06:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:00.255 [2024-08-13 06:09:01.823084] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.255 [2024-08-13 06:09:01.823124] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:00.255 06:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:00.255 [2024-08-13 06:09:02.010750] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.255 [2024-08-13 06:09:02.010791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.255 [2024-08-13 06:09:02.010802] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.255 [2024-08-13 06:09:02.010809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.255 [2024-08-13 06:09:02.010816] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.255 [2024-08-13 06:09:02.010822] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.255 [2024-08-13 06:09:02.010830] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:00.255 [2024-08-13 06:09:02.010835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:00.255 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.519 [2024-08-13 06:09:02.222983] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.519 BaseBdev1 00:14:00.519 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:00.519 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:00.519 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:00.519 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:00.519 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:00.519 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:00.519 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:00.780 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.039 [ 00:14:01.039 { 00:14:01.039 "name": "BaseBdev1", 00:14:01.039 "aliases": [ 00:14:01.039 "13eedeb8-de67-4bb2-8524-8c151ff2aee0" 00:14:01.039 ], 00:14:01.039 "product_name": "Malloc disk", 00:14:01.039 "block_size": 512, 00:14:01.039 "num_blocks": 65536, 00:14:01.039 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:01.040 "assigned_rate_limits": { 00:14:01.040 "rw_ios_per_sec": 0, 00:14:01.040 "rw_mbytes_per_sec": 0, 00:14:01.040 "r_mbytes_per_sec": 0, 00:14:01.040 "w_mbytes_per_sec": 0 00:14:01.040 }, 00:14:01.040 "claimed": true, 00:14:01.040 "claim_type": "exclusive_write", 00:14:01.040 "zoned": false, 00:14:01.040 "supported_io_types": { 00:14:01.040 "read": true, 00:14:01.040 "write": true, 00:14:01.040 "unmap": true, 00:14:01.040 "flush": true, 00:14:01.040 "reset": true, 00:14:01.040 "nvme_admin": false, 00:14:01.040 "nvme_io": false, 00:14:01.040 "nvme_io_md": false, 00:14:01.040 "write_zeroes": true, 00:14:01.040 "zcopy": true, 00:14:01.040 "get_zone_info": false, 00:14:01.040 "zone_management": false, 00:14:01.040 "zone_append": false, 00:14:01.040 "compare": false, 00:14:01.040 "compare_and_write": false, 00:14:01.040 "abort": true, 00:14:01.040 "seek_hole": false, 00:14:01.040 "seek_data": false, 00:14:01.040 "copy": true, 00:14:01.040 "nvme_iov_md": false 00:14:01.040 }, 00:14:01.040 "memory_domains": [ 00:14:01.040 { 00:14:01.040 "dma_device_id": "system", 00:14:01.040 "dma_device_type": 1 00:14:01.040 }, 00:14:01.040 { 00:14:01.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.040 "dma_device_type": 2 00:14:01.040 } 00:14:01.040 ], 00:14:01.040 "driver_specific": {} 00:14:01.040 } 00:14:01.040 ] 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:01.040 06:09:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.040 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.300 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.300 "name": "Existed_Raid", 00:14:01.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.300 "strip_size_kb": 64, 00:14:01.300 "state": "configuring", 00:14:01.300 "raid_level": "concat", 00:14:01.300 "superblock": false, 00:14:01.300 "num_base_bdevs": 4, 00:14:01.300 "num_base_bdevs_discovered": 1, 00:14:01.300 "num_base_bdevs_operational": 4, 00:14:01.300 "base_bdevs_list": [ 00:14:01.300 { 00:14:01.300 "name": "BaseBdev1", 00:14:01.300 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:01.300 "is_configured": true, 00:14:01.300 "data_offset": 0, 00:14:01.300 "data_size": 65536 00:14:01.300 }, 00:14:01.300 { 00:14:01.300 "name": "BaseBdev2", 00:14:01.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.300 "is_configured": false, 00:14:01.300 "data_offset": 0, 00:14:01.300 "data_size": 0 00:14:01.300 }, 00:14:01.300 { 00:14:01.300 "name": "BaseBdev3", 00:14:01.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.300 "is_configured": false, 00:14:01.300 "data_offset": 0, 00:14:01.300 "data_size": 0 00:14:01.300 }, 00:14:01.300 { 00:14:01.300 "name": "BaseBdev4", 00:14:01.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.300 "is_configured": false, 00:14:01.300 "data_offset": 0, 00:14:01.300 "data_size": 0 00:14:01.300 } 00:14:01.300 ] 00:14:01.300 }' 00:14:01.300 06:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.300 06:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.869 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:01.869 [2024-08-13 06:09:03.536903] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.869 [2024-08-13 06:09:03.537035] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 
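The passage above exercises the concat raid's "configuring" state over RPC: bdev_raid_create registers Existed_Raid while its named base bdevs are still missing, each bdev_malloc_create then satisfies one more base bdev, verify_raid_bdev_state checks the discovered count, and bdev_raid_delete tears the half-built raid back down before the next step. A minimal stand-alone sketch of that sequence (not the test's own helpers), assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and using only the RPC calls and arguments shown in this log:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # Register the raid first; it stays in the "configuring" state until every
  # named base bdev exists and has been claimed.
  rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # Create the 32 MB / 512-byte-block malloc base bdevs one by one; each create
  # bumps num_base_bdevs_discovered, and the fourth one completes the raid.
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      rpc bdev_malloc_create 32 512 -b "$b"
  done

  # Inspect the resulting raid state, as verify_raid_bdev_state does above.
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
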
00:14:01.869 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:02.129 [2024-08-13 06:09:03.712652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.129 [2024-08-13 06:09:03.714347] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.129 [2024-08-13 06:09:03.714416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.129 [2024-08-13 06:09:03.714448] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:02.129 [2024-08-13 06:09:03.714467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.129 [2024-08-13 06:09:03.714485] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:02.129 [2024-08-13 06:09:03.714502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.129 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.389 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.389 "name": "Existed_Raid", 00:14:02.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.389 "strip_size_kb": 64, 00:14:02.389 "state": "configuring", 00:14:02.389 "raid_level": "concat", 00:14:02.389 "superblock": false, 00:14:02.389 "num_base_bdevs": 4, 00:14:02.389 "num_base_bdevs_discovered": 1, 00:14:02.389 "num_base_bdevs_operational": 4, 00:14:02.389 "base_bdevs_list": [ 00:14:02.389 { 00:14:02.389 "name": "BaseBdev1", 00:14:02.389 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:02.389 "is_configured": true, 00:14:02.389 "data_offset": 0, 00:14:02.389 
"data_size": 65536 00:14:02.389 }, 00:14:02.389 { 00:14:02.389 "name": "BaseBdev2", 00:14:02.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.389 "is_configured": false, 00:14:02.389 "data_offset": 0, 00:14:02.389 "data_size": 0 00:14:02.389 }, 00:14:02.389 { 00:14:02.389 "name": "BaseBdev3", 00:14:02.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.389 "is_configured": false, 00:14:02.389 "data_offset": 0, 00:14:02.389 "data_size": 0 00:14:02.389 }, 00:14:02.389 { 00:14:02.389 "name": "BaseBdev4", 00:14:02.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.389 "is_configured": false, 00:14:02.389 "data_offset": 0, 00:14:02.389 "data_size": 0 00:14:02.389 } 00:14:02.389 ] 00:14:02.389 }' 00:14:02.389 06:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.389 06:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.959 [2024-08-13 06:09:04.713616] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.959 BaseBdev2 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:02.959 06:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.218 06:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.478 [ 00:14:03.478 { 00:14:03.478 "name": "BaseBdev2", 00:14:03.478 "aliases": [ 00:14:03.478 "a0362cb9-5b45-4ba1-a16a-a81dc7406da0" 00:14:03.478 ], 00:14:03.478 "product_name": "Malloc disk", 00:14:03.478 "block_size": 512, 00:14:03.478 "num_blocks": 65536, 00:14:03.478 "uuid": "a0362cb9-5b45-4ba1-a16a-a81dc7406da0", 00:14:03.478 "assigned_rate_limits": { 00:14:03.478 "rw_ios_per_sec": 0, 00:14:03.478 "rw_mbytes_per_sec": 0, 00:14:03.478 "r_mbytes_per_sec": 0, 00:14:03.478 "w_mbytes_per_sec": 0 00:14:03.478 }, 00:14:03.478 "claimed": true, 00:14:03.478 "claim_type": "exclusive_write", 00:14:03.478 "zoned": false, 00:14:03.478 "supported_io_types": { 00:14:03.478 "read": true, 00:14:03.478 "write": true, 00:14:03.478 "unmap": true, 00:14:03.478 "flush": true, 00:14:03.478 "reset": true, 00:14:03.478 "nvme_admin": false, 00:14:03.478 "nvme_io": false, 00:14:03.478 "nvme_io_md": false, 00:14:03.478 "write_zeroes": true, 00:14:03.478 "zcopy": true, 00:14:03.478 "get_zone_info": false, 00:14:03.478 "zone_management": false, 00:14:03.478 "zone_append": false, 00:14:03.478 "compare": false, 00:14:03.478 "compare_and_write": false, 00:14:03.478 "abort": true, 00:14:03.478 "seek_hole": false, 
00:14:03.478 "seek_data": false, 00:14:03.478 "copy": true, 00:14:03.478 "nvme_iov_md": false 00:14:03.478 }, 00:14:03.478 "memory_domains": [ 00:14:03.478 { 00:14:03.478 "dma_device_id": "system", 00:14:03.478 "dma_device_type": 1 00:14:03.478 }, 00:14:03.478 { 00:14:03.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.478 "dma_device_type": 2 00:14:03.478 } 00:14:03.478 ], 00:14:03.478 "driver_specific": {} 00:14:03.478 } 00:14:03.478 ] 00:14:03.478 06:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:03.478 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:03.478 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:03.478 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.479 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.739 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.739 "name": "Existed_Raid", 00:14:03.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.739 "strip_size_kb": 64, 00:14:03.739 "state": "configuring", 00:14:03.739 "raid_level": "concat", 00:14:03.739 "superblock": false, 00:14:03.739 "num_base_bdevs": 4, 00:14:03.739 "num_base_bdevs_discovered": 2, 00:14:03.739 "num_base_bdevs_operational": 4, 00:14:03.739 "base_bdevs_list": [ 00:14:03.739 { 00:14:03.739 "name": "BaseBdev1", 00:14:03.739 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:03.739 "is_configured": true, 00:14:03.739 "data_offset": 0, 00:14:03.739 "data_size": 65536 00:14:03.739 }, 00:14:03.739 { 00:14:03.739 "name": "BaseBdev2", 00:14:03.739 "uuid": "a0362cb9-5b45-4ba1-a16a-a81dc7406da0", 00:14:03.739 "is_configured": true, 00:14:03.739 "data_offset": 0, 00:14:03.739 "data_size": 65536 00:14:03.739 }, 00:14:03.739 { 00:14:03.739 "name": "BaseBdev3", 00:14:03.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.739 "is_configured": false, 00:14:03.739 "data_offset": 0, 00:14:03.739 "data_size": 0 00:14:03.739 }, 00:14:03.739 { 00:14:03.739 "name": "BaseBdev4", 00:14:03.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.739 "is_configured": 
false, 00:14:03.739 "data_offset": 0, 00:14:03.739 "data_size": 0 00:14:03.739 } 00:14:03.739 ] 00:14:03.739 }' 00:14:03.739 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.739 06:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.308 06:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:04.308 [2024-08-13 06:09:06.046347] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.308 BaseBdev3 00:14:04.308 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:04.308 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:04.308 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:04.309 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:04.309 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:04.309 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:04.309 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:04.568 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:04.828 [ 00:14:04.828 { 00:14:04.828 "name": "BaseBdev3", 00:14:04.828 "aliases": [ 00:14:04.828 "6ced4774-e6dd-49df-9fad-7ee2b5ef9851" 00:14:04.828 ], 00:14:04.828 "product_name": "Malloc disk", 00:14:04.828 "block_size": 512, 00:14:04.828 "num_blocks": 65536, 00:14:04.828 "uuid": "6ced4774-e6dd-49df-9fad-7ee2b5ef9851", 00:14:04.828 "assigned_rate_limits": { 00:14:04.828 "rw_ios_per_sec": 0, 00:14:04.828 "rw_mbytes_per_sec": 0, 00:14:04.828 "r_mbytes_per_sec": 0, 00:14:04.828 "w_mbytes_per_sec": 0 00:14:04.828 }, 00:14:04.828 "claimed": true, 00:14:04.828 "claim_type": "exclusive_write", 00:14:04.828 "zoned": false, 00:14:04.828 "supported_io_types": { 00:14:04.828 "read": true, 00:14:04.828 "write": true, 00:14:04.828 "unmap": true, 00:14:04.828 "flush": true, 00:14:04.828 "reset": true, 00:14:04.828 "nvme_admin": false, 00:14:04.828 "nvme_io": false, 00:14:04.828 "nvme_io_md": false, 00:14:04.828 "write_zeroes": true, 00:14:04.828 "zcopy": true, 00:14:04.828 "get_zone_info": false, 00:14:04.828 "zone_management": false, 00:14:04.828 "zone_append": false, 00:14:04.828 "compare": false, 00:14:04.828 "compare_and_write": false, 00:14:04.828 "abort": true, 00:14:04.828 "seek_hole": false, 00:14:04.828 "seek_data": false, 00:14:04.828 "copy": true, 00:14:04.828 "nvme_iov_md": false 00:14:04.828 }, 00:14:04.828 "memory_domains": [ 00:14:04.828 { 00:14:04.828 "dma_device_id": "system", 00:14:04.828 "dma_device_type": 1 00:14:04.828 }, 00:14:04.828 { 00:14:04.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.828 "dma_device_type": 2 00:14:04.828 } 00:14:04.828 ], 00:14:04.828 "driver_specific": {} 00:14:04.828 } 00:14:04.828 ] 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.828 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.088 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.088 "name": "Existed_Raid", 00:14:05.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.088 "strip_size_kb": 64, 00:14:05.088 "state": "configuring", 00:14:05.088 "raid_level": "concat", 00:14:05.088 "superblock": false, 00:14:05.088 "num_base_bdevs": 4, 00:14:05.088 "num_base_bdevs_discovered": 3, 00:14:05.088 "num_base_bdevs_operational": 4, 00:14:05.088 "base_bdevs_list": [ 00:14:05.088 { 00:14:05.088 "name": "BaseBdev1", 00:14:05.088 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:05.088 "is_configured": true, 00:14:05.088 "data_offset": 0, 00:14:05.088 "data_size": 65536 00:14:05.088 }, 00:14:05.088 { 00:14:05.088 "name": "BaseBdev2", 00:14:05.088 "uuid": "a0362cb9-5b45-4ba1-a16a-a81dc7406da0", 00:14:05.088 "is_configured": true, 00:14:05.088 "data_offset": 0, 00:14:05.088 "data_size": 65536 00:14:05.088 }, 00:14:05.088 { 00:14:05.088 "name": "BaseBdev3", 00:14:05.088 "uuid": "6ced4774-e6dd-49df-9fad-7ee2b5ef9851", 00:14:05.088 "is_configured": true, 00:14:05.088 "data_offset": 0, 00:14:05.088 "data_size": 65536 00:14:05.088 }, 00:14:05.088 { 00:14:05.088 "name": "BaseBdev4", 00:14:05.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.088 "is_configured": false, 00:14:05.088 "data_offset": 0, 00:14:05.088 "data_size": 0 00:14:05.088 } 00:14:05.088 ] 00:14:05.088 }' 00:14:05.088 06:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.088 06:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:05.657 [2024-08-13 06:09:07.343186] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:05.657 [2024-08-13 06:09:07.343235] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:05.657 [2024-08-13 06:09:07.343250] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:05.657 [2024-08-13 06:09:07.343524] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:05.657 [2024-08-13 06:09:07.343656] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:05.657 [2024-08-13 06:09:07.343672] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:05.657 [2024-08-13 06:09:07.343835] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.657 BaseBdev4 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:05.657 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.917 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:06.177 [ 00:14:06.177 { 00:14:06.177 "name": "BaseBdev4", 00:14:06.177 "aliases": [ 00:14:06.177 "ce1284cf-b091-44a9-9dd7-0ea4eae2103e" 00:14:06.177 ], 00:14:06.177 "product_name": "Malloc disk", 00:14:06.177 "block_size": 512, 00:14:06.177 "num_blocks": 65536, 00:14:06.177 "uuid": "ce1284cf-b091-44a9-9dd7-0ea4eae2103e", 00:14:06.177 "assigned_rate_limits": { 00:14:06.177 "rw_ios_per_sec": 0, 00:14:06.177 "rw_mbytes_per_sec": 0, 00:14:06.177 "r_mbytes_per_sec": 0, 00:14:06.177 "w_mbytes_per_sec": 0 00:14:06.177 }, 00:14:06.177 "claimed": true, 00:14:06.177 "claim_type": "exclusive_write", 00:14:06.177 "zoned": false, 00:14:06.177 "supported_io_types": { 00:14:06.177 "read": true, 00:14:06.177 "write": true, 00:14:06.177 "unmap": true, 00:14:06.177 "flush": true, 00:14:06.177 "reset": true, 00:14:06.177 "nvme_admin": false, 00:14:06.177 "nvme_io": false, 00:14:06.177 "nvme_io_md": false, 00:14:06.177 "write_zeroes": true, 00:14:06.177 "zcopy": true, 00:14:06.177 "get_zone_info": false, 00:14:06.177 "zone_management": false, 00:14:06.177 "zone_append": false, 00:14:06.177 "compare": false, 00:14:06.177 "compare_and_write": false, 00:14:06.177 "abort": true, 00:14:06.177 "seek_hole": false, 00:14:06.177 "seek_data": false, 00:14:06.177 "copy": true, 00:14:06.177 "nvme_iov_md": false 00:14:06.177 }, 00:14:06.177 "memory_domains": [ 00:14:06.177 { 00:14:06.177 "dma_device_id": "system", 00:14:06.177 "dma_device_type": 1 00:14:06.177 }, 00:14:06.177 { 00:14:06.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.177 "dma_device_type": 2 00:14:06.177 } 00:14:06.177 ], 00:14:06.177 "driver_specific": {} 00:14:06.177 } 00:14:06.177 ] 
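At this point the fourth base bdev has been claimed, the raid I/O device is registered, and the volume reported by bdev_raid_get_bdevs flips from "configuring" to "online" with all four base bdevs discovered; the verify_raid_bdev_state call that follows checks exactly those fields. A rough, illustrative sketch of that kind of check (the actual helper is the verify_raid_bdev_state function in bdev_raid.sh), reusing the RPC call and jq filter shown in this log:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

  # Expected values for this step: state online, level concat,
  # strip_size_kb of 64, and 4 of 4 base bdevs discovered.
  [[ $(jq -r '.state' <<< "$info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$info") == concat ]]
  [[ $(jq -r '.strip_size_kb' <<< "$info") -eq 64 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 4 ]]
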
00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.177 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.177 "name": "Existed_Raid", 00:14:06.177 "uuid": "ba91b50c-a9fe-47c5-b13d-b2158e8a1f57", 00:14:06.177 "strip_size_kb": 64, 00:14:06.177 "state": "online", 00:14:06.177 "raid_level": "concat", 00:14:06.177 "superblock": false, 00:14:06.177 "num_base_bdevs": 4, 00:14:06.177 "num_base_bdevs_discovered": 4, 00:14:06.177 "num_base_bdevs_operational": 4, 00:14:06.177 "base_bdevs_list": [ 00:14:06.177 { 00:14:06.177 "name": "BaseBdev1", 00:14:06.177 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:06.177 "is_configured": true, 00:14:06.177 "data_offset": 0, 00:14:06.177 "data_size": 65536 00:14:06.178 }, 00:14:06.178 { 00:14:06.178 "name": "BaseBdev2", 00:14:06.178 "uuid": "a0362cb9-5b45-4ba1-a16a-a81dc7406da0", 00:14:06.178 "is_configured": true, 00:14:06.178 "data_offset": 0, 00:14:06.178 "data_size": 65536 00:14:06.178 }, 00:14:06.178 { 00:14:06.178 "name": "BaseBdev3", 00:14:06.178 "uuid": "6ced4774-e6dd-49df-9fad-7ee2b5ef9851", 00:14:06.178 "is_configured": true, 00:14:06.178 "data_offset": 0, 00:14:06.178 "data_size": 65536 00:14:06.178 }, 00:14:06.178 { 00:14:06.178 "name": "BaseBdev4", 00:14:06.178 "uuid": "ce1284cf-b091-44a9-9dd7-0ea4eae2103e", 00:14:06.178 "is_configured": true, 00:14:06.178 "data_offset": 0, 00:14:06.178 "data_size": 65536 00:14:06.178 } 00:14:06.178 ] 00:14:06.178 }' 00:14:06.178 06:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.178 06:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.747 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.747 
06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:06.747 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:06.747 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:06.747 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:06.747 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:06.747 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:06.747 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:07.007 [2024-08-13 06:09:08.601614] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.007 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:07.007 "name": "Existed_Raid", 00:14:07.007 "aliases": [ 00:14:07.007 "ba91b50c-a9fe-47c5-b13d-b2158e8a1f57" 00:14:07.007 ], 00:14:07.007 "product_name": "Raid Volume", 00:14:07.007 "block_size": 512, 00:14:07.007 "num_blocks": 262144, 00:14:07.007 "uuid": "ba91b50c-a9fe-47c5-b13d-b2158e8a1f57", 00:14:07.007 "assigned_rate_limits": { 00:14:07.007 "rw_ios_per_sec": 0, 00:14:07.007 "rw_mbytes_per_sec": 0, 00:14:07.007 "r_mbytes_per_sec": 0, 00:14:07.007 "w_mbytes_per_sec": 0 00:14:07.007 }, 00:14:07.007 "claimed": false, 00:14:07.007 "zoned": false, 00:14:07.007 "supported_io_types": { 00:14:07.007 "read": true, 00:14:07.007 "write": true, 00:14:07.007 "unmap": true, 00:14:07.007 "flush": true, 00:14:07.007 "reset": true, 00:14:07.007 "nvme_admin": false, 00:14:07.007 "nvme_io": false, 00:14:07.007 "nvme_io_md": false, 00:14:07.007 "write_zeroes": true, 00:14:07.007 "zcopy": false, 00:14:07.007 "get_zone_info": false, 00:14:07.007 "zone_management": false, 00:14:07.007 "zone_append": false, 00:14:07.007 "compare": false, 00:14:07.007 "compare_and_write": false, 00:14:07.007 "abort": false, 00:14:07.007 "seek_hole": false, 00:14:07.007 "seek_data": false, 00:14:07.007 "copy": false, 00:14:07.007 "nvme_iov_md": false 00:14:07.007 }, 00:14:07.007 "memory_domains": [ 00:14:07.007 { 00:14:07.007 "dma_device_id": "system", 00:14:07.007 "dma_device_type": 1 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.007 "dma_device_type": 2 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "dma_device_id": "system", 00:14:07.007 "dma_device_type": 1 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.007 "dma_device_type": 2 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "dma_device_id": "system", 00:14:07.007 "dma_device_type": 1 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.007 "dma_device_type": 2 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "dma_device_id": "system", 00:14:07.007 "dma_device_type": 1 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.007 "dma_device_type": 2 00:14:07.007 } 00:14:07.007 ], 00:14:07.007 "driver_specific": { 00:14:07.007 "raid": { 00:14:07.007 "uuid": "ba91b50c-a9fe-47c5-b13d-b2158e8a1f57", 00:14:07.007 "strip_size_kb": 64, 00:14:07.007 "state": "online", 00:14:07.007 "raid_level": "concat", 00:14:07.007 "superblock": false, 00:14:07.007 "num_base_bdevs": 4, 00:14:07.007 
"num_base_bdevs_discovered": 4, 00:14:07.007 "num_base_bdevs_operational": 4, 00:14:07.007 "base_bdevs_list": [ 00:14:07.007 { 00:14:07.007 "name": "BaseBdev1", 00:14:07.007 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:07.007 "is_configured": true, 00:14:07.007 "data_offset": 0, 00:14:07.007 "data_size": 65536 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "name": "BaseBdev2", 00:14:07.007 "uuid": "a0362cb9-5b45-4ba1-a16a-a81dc7406da0", 00:14:07.007 "is_configured": true, 00:14:07.007 "data_offset": 0, 00:14:07.007 "data_size": 65536 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "name": "BaseBdev3", 00:14:07.007 "uuid": "6ced4774-e6dd-49df-9fad-7ee2b5ef9851", 00:14:07.007 "is_configured": true, 00:14:07.007 "data_offset": 0, 00:14:07.007 "data_size": 65536 00:14:07.007 }, 00:14:07.007 { 00:14:07.007 "name": "BaseBdev4", 00:14:07.007 "uuid": "ce1284cf-b091-44a9-9dd7-0ea4eae2103e", 00:14:07.007 "is_configured": true, 00:14:07.007 "data_offset": 0, 00:14:07.007 "data_size": 65536 00:14:07.007 } 00:14:07.007 ] 00:14:07.007 } 00:14:07.007 } 00:14:07.007 }' 00:14:07.007 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:07.007 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:07.007 BaseBdev2 00:14:07.007 BaseBdev3 00:14:07.007 BaseBdev4' 00:14:07.007 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:07.007 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:07.007 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:07.267 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.267 "name": "BaseBdev1", 00:14:07.267 "aliases": [ 00:14:07.267 "13eedeb8-de67-4bb2-8524-8c151ff2aee0" 00:14:07.267 ], 00:14:07.267 "product_name": "Malloc disk", 00:14:07.267 "block_size": 512, 00:14:07.267 "num_blocks": 65536, 00:14:07.267 "uuid": "13eedeb8-de67-4bb2-8524-8c151ff2aee0", 00:14:07.267 "assigned_rate_limits": { 00:14:07.267 "rw_ios_per_sec": 0, 00:14:07.267 "rw_mbytes_per_sec": 0, 00:14:07.267 "r_mbytes_per_sec": 0, 00:14:07.267 "w_mbytes_per_sec": 0 00:14:07.267 }, 00:14:07.267 "claimed": true, 00:14:07.267 "claim_type": "exclusive_write", 00:14:07.267 "zoned": false, 00:14:07.267 "supported_io_types": { 00:14:07.267 "read": true, 00:14:07.267 "write": true, 00:14:07.267 "unmap": true, 00:14:07.267 "flush": true, 00:14:07.267 "reset": true, 00:14:07.267 "nvme_admin": false, 00:14:07.267 "nvme_io": false, 00:14:07.267 "nvme_io_md": false, 00:14:07.267 "write_zeroes": true, 00:14:07.267 "zcopy": true, 00:14:07.267 "get_zone_info": false, 00:14:07.267 "zone_management": false, 00:14:07.267 "zone_append": false, 00:14:07.267 "compare": false, 00:14:07.267 "compare_and_write": false, 00:14:07.267 "abort": true, 00:14:07.267 "seek_hole": false, 00:14:07.267 "seek_data": false, 00:14:07.267 "copy": true, 00:14:07.267 "nvme_iov_md": false 00:14:07.267 }, 00:14:07.267 "memory_domains": [ 00:14:07.267 { 00:14:07.267 "dma_device_id": "system", 00:14:07.267 "dma_device_type": 1 00:14:07.267 }, 00:14:07.267 { 00:14:07.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.267 "dma_device_type": 2 00:14:07.267 } 00:14:07.267 ], 00:14:07.267 "driver_specific": {} 00:14:07.267 }' 
00:14:07.267 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.267 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.267 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.267 06:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.267 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:07.526 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:07.786 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.786 "name": "BaseBdev2", 00:14:07.786 "aliases": [ 00:14:07.786 "a0362cb9-5b45-4ba1-a16a-a81dc7406da0" 00:14:07.786 ], 00:14:07.786 "product_name": "Malloc disk", 00:14:07.786 "block_size": 512, 00:14:07.786 "num_blocks": 65536, 00:14:07.786 "uuid": "a0362cb9-5b45-4ba1-a16a-a81dc7406da0", 00:14:07.786 "assigned_rate_limits": { 00:14:07.786 "rw_ios_per_sec": 0, 00:14:07.786 "rw_mbytes_per_sec": 0, 00:14:07.786 "r_mbytes_per_sec": 0, 00:14:07.786 "w_mbytes_per_sec": 0 00:14:07.786 }, 00:14:07.786 "claimed": true, 00:14:07.786 "claim_type": "exclusive_write", 00:14:07.786 "zoned": false, 00:14:07.786 "supported_io_types": { 00:14:07.786 "read": true, 00:14:07.786 "write": true, 00:14:07.786 "unmap": true, 00:14:07.786 "flush": true, 00:14:07.786 "reset": true, 00:14:07.786 "nvme_admin": false, 00:14:07.786 "nvme_io": false, 00:14:07.786 "nvme_io_md": false, 00:14:07.786 "write_zeroes": true, 00:14:07.786 "zcopy": true, 00:14:07.786 "get_zone_info": false, 00:14:07.786 "zone_management": false, 00:14:07.786 "zone_append": false, 00:14:07.786 "compare": false, 00:14:07.786 "compare_and_write": false, 00:14:07.786 "abort": true, 00:14:07.786 "seek_hole": false, 00:14:07.786 "seek_data": false, 00:14:07.786 "copy": true, 00:14:07.786 "nvme_iov_md": false 00:14:07.786 }, 00:14:07.786 "memory_domains": [ 00:14:07.786 { 00:14:07.786 "dma_device_id": "system", 00:14:07.786 "dma_device_type": 1 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.786 "dma_device_type": 2 00:14:07.786 } 00:14:07.786 ], 00:14:07.786 "driver_specific": {} 00:14:07.786 }' 00:14:07.786 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.786 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:14:07.786 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.786 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.786 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:08.046 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.305 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.305 "name": "BaseBdev3", 00:14:08.305 "aliases": [ 00:14:08.305 "6ced4774-e6dd-49df-9fad-7ee2b5ef9851" 00:14:08.305 ], 00:14:08.305 "product_name": "Malloc disk", 00:14:08.305 "block_size": 512, 00:14:08.305 "num_blocks": 65536, 00:14:08.305 "uuid": "6ced4774-e6dd-49df-9fad-7ee2b5ef9851", 00:14:08.305 "assigned_rate_limits": { 00:14:08.305 "rw_ios_per_sec": 0, 00:14:08.305 "rw_mbytes_per_sec": 0, 00:14:08.305 "r_mbytes_per_sec": 0, 00:14:08.305 "w_mbytes_per_sec": 0 00:14:08.305 }, 00:14:08.305 "claimed": true, 00:14:08.305 "claim_type": "exclusive_write", 00:14:08.305 "zoned": false, 00:14:08.305 "supported_io_types": { 00:14:08.305 "read": true, 00:14:08.305 "write": true, 00:14:08.305 "unmap": true, 00:14:08.305 "flush": true, 00:14:08.305 "reset": true, 00:14:08.305 "nvme_admin": false, 00:14:08.305 "nvme_io": false, 00:14:08.305 "nvme_io_md": false, 00:14:08.305 "write_zeroes": true, 00:14:08.305 "zcopy": true, 00:14:08.305 "get_zone_info": false, 00:14:08.305 "zone_management": false, 00:14:08.305 "zone_append": false, 00:14:08.305 "compare": false, 00:14:08.305 "compare_and_write": false, 00:14:08.305 "abort": true, 00:14:08.305 "seek_hole": false, 00:14:08.305 "seek_data": false, 00:14:08.305 "copy": true, 00:14:08.305 "nvme_iov_md": false 00:14:08.305 }, 00:14:08.305 "memory_domains": [ 00:14:08.305 { 00:14:08.305 "dma_device_id": "system", 00:14:08.305 "dma_device_type": 1 00:14:08.305 }, 00:14:08.305 { 00:14:08.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.305 "dma_device_type": 2 00:14:08.305 } 00:14:08.305 ], 00:14:08.305 "driver_specific": {} 00:14:08.305 }' 00:14:08.305 06:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.305 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.305 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:08.305 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:08.564 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.824 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.824 "name": "BaseBdev4", 00:14:08.824 "aliases": [ 00:14:08.824 "ce1284cf-b091-44a9-9dd7-0ea4eae2103e" 00:14:08.824 ], 00:14:08.824 "product_name": "Malloc disk", 00:14:08.824 "block_size": 512, 00:14:08.824 "num_blocks": 65536, 00:14:08.824 "uuid": "ce1284cf-b091-44a9-9dd7-0ea4eae2103e", 00:14:08.824 "assigned_rate_limits": { 00:14:08.824 "rw_ios_per_sec": 0, 00:14:08.824 "rw_mbytes_per_sec": 0, 00:14:08.824 "r_mbytes_per_sec": 0, 00:14:08.824 "w_mbytes_per_sec": 0 00:14:08.824 }, 00:14:08.824 "claimed": true, 00:14:08.824 "claim_type": "exclusive_write", 00:14:08.824 "zoned": false, 00:14:08.824 "supported_io_types": { 00:14:08.824 "read": true, 00:14:08.824 "write": true, 00:14:08.824 "unmap": true, 00:14:08.824 "flush": true, 00:14:08.824 "reset": true, 00:14:08.824 "nvme_admin": false, 00:14:08.824 "nvme_io": false, 00:14:08.824 "nvme_io_md": false, 00:14:08.824 "write_zeroes": true, 00:14:08.824 "zcopy": true, 00:14:08.824 "get_zone_info": false, 00:14:08.824 "zone_management": false, 00:14:08.824 "zone_append": false, 00:14:08.824 "compare": false, 00:14:08.824 "compare_and_write": false, 00:14:08.824 "abort": true, 00:14:08.824 "seek_hole": false, 00:14:08.824 "seek_data": false, 00:14:08.824 "copy": true, 00:14:08.824 "nvme_iov_md": false 00:14:08.824 }, 00:14:08.824 "memory_domains": [ 00:14:08.824 { 00:14:08.824 "dma_device_id": "system", 00:14:08.824 "dma_device_type": 1 00:14:08.824 }, 00:14:08.824 { 00:14:08.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.824 "dma_device_type": 2 00:14:08.824 } 00:14:08.824 ], 00:14:08.824 "driver_specific": {} 00:14:08.824 }' 00:14:08.824 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.824 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.084 06:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:09.343 [2024-08-13 06:09:11.017433] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.344 [2024-08-13 06:09:11.017462] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.344 [2024-08-13 06:09:11.017524] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.344 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.603 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.603 "name": "Existed_Raid", 00:14:09.603 "uuid": "ba91b50c-a9fe-47c5-b13d-b2158e8a1f57", 00:14:09.603 "strip_size_kb": 64, 00:14:09.603 "state": "offline", 00:14:09.603 "raid_level": "concat", 00:14:09.603 "superblock": false, 00:14:09.603 "num_base_bdevs": 4, 
00:14:09.603 "num_base_bdevs_discovered": 3, 00:14:09.603 "num_base_bdevs_operational": 3, 00:14:09.603 "base_bdevs_list": [ 00:14:09.603 { 00:14:09.603 "name": null, 00:14:09.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.603 "is_configured": false, 00:14:09.603 "data_offset": 0, 00:14:09.603 "data_size": 65536 00:14:09.603 }, 00:14:09.603 { 00:14:09.603 "name": "BaseBdev2", 00:14:09.603 "uuid": "a0362cb9-5b45-4ba1-a16a-a81dc7406da0", 00:14:09.603 "is_configured": true, 00:14:09.603 "data_offset": 0, 00:14:09.603 "data_size": 65536 00:14:09.603 }, 00:14:09.603 { 00:14:09.603 "name": "BaseBdev3", 00:14:09.603 "uuid": "6ced4774-e6dd-49df-9fad-7ee2b5ef9851", 00:14:09.603 "is_configured": true, 00:14:09.603 "data_offset": 0, 00:14:09.603 "data_size": 65536 00:14:09.603 }, 00:14:09.604 { 00:14:09.604 "name": "BaseBdev4", 00:14:09.604 "uuid": "ce1284cf-b091-44a9-9dd7-0ea4eae2103e", 00:14:09.604 "is_configured": true, 00:14:09.604 "data_offset": 0, 00:14:09.604 "data_size": 65536 00:14:09.604 } 00:14:09.604 ] 00:14:09.604 }' 00:14:09.604 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.604 06:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.173 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:10.173 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.173 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.173 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:10.432 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:10.432 06:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.432 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:10.432 [2024-08-13 06:09:12.162388] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.432 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:10.432 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.432 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.432 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:10.692 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:10.692 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.692 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:10.951 [2024-08-13 06:09:12.556934] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:10.951 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:10.951 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.951 06:09:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.951 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:11.211 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:11.211 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.211 06:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:11.211 [2024-08-13 06:09:12.979300] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:11.211 [2024-08-13 06:09:12.979345] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:11.471 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.731 BaseBdev2 00:14:11.731 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:11.731 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:11.731 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:11.731 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:11.731 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:11.731 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:11.731 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.990 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.250 [ 00:14:12.250 { 00:14:12.250 "name": "BaseBdev2", 00:14:12.250 "aliases": [ 00:14:12.250 "53b3fda3-7f17-40d6-addc-4f4e0d75fc19" 00:14:12.250 ], 00:14:12.250 "product_name": "Malloc disk", 00:14:12.250 "block_size": 512, 00:14:12.250 "num_blocks": 65536, 00:14:12.250 "uuid": 
"53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:12.250 "assigned_rate_limits": { 00:14:12.250 "rw_ios_per_sec": 0, 00:14:12.250 "rw_mbytes_per_sec": 0, 00:14:12.250 "r_mbytes_per_sec": 0, 00:14:12.250 "w_mbytes_per_sec": 0 00:14:12.250 }, 00:14:12.250 "claimed": false, 00:14:12.250 "zoned": false, 00:14:12.250 "supported_io_types": { 00:14:12.250 "read": true, 00:14:12.250 "write": true, 00:14:12.250 "unmap": true, 00:14:12.250 "flush": true, 00:14:12.250 "reset": true, 00:14:12.250 "nvme_admin": false, 00:14:12.250 "nvme_io": false, 00:14:12.250 "nvme_io_md": false, 00:14:12.250 "write_zeroes": true, 00:14:12.250 "zcopy": true, 00:14:12.250 "get_zone_info": false, 00:14:12.250 "zone_management": false, 00:14:12.250 "zone_append": false, 00:14:12.250 "compare": false, 00:14:12.250 "compare_and_write": false, 00:14:12.250 "abort": true, 00:14:12.250 "seek_hole": false, 00:14:12.250 "seek_data": false, 00:14:12.250 "copy": true, 00:14:12.250 "nvme_iov_md": false 00:14:12.250 }, 00:14:12.250 "memory_domains": [ 00:14:12.250 { 00:14:12.250 "dma_device_id": "system", 00:14:12.250 "dma_device_type": 1 00:14:12.250 }, 00:14:12.250 { 00:14:12.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.250 "dma_device_type": 2 00:14:12.250 } 00:14:12.250 ], 00:14:12.250 "driver_specific": {} 00:14:12.250 } 00:14:12.250 ] 00:14:12.250 06:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:12.250 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:12.250 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.250 06:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.250 BaseBdev3 00:14:12.250 06:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:12.250 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:12.250 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:12.250 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:12.250 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:12.250 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:12.250 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.510 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.769 [ 00:14:12.769 { 00:14:12.769 "name": "BaseBdev3", 00:14:12.769 "aliases": [ 00:14:12.769 "652f6c8e-b920-41cf-b0da-f3b2833c4cd2" 00:14:12.769 ], 00:14:12.769 "product_name": "Malloc disk", 00:14:12.769 "block_size": 512, 00:14:12.769 "num_blocks": 65536, 00:14:12.769 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:12.769 "assigned_rate_limits": { 00:14:12.769 "rw_ios_per_sec": 0, 00:14:12.769 "rw_mbytes_per_sec": 0, 00:14:12.769 "r_mbytes_per_sec": 0, 00:14:12.769 "w_mbytes_per_sec": 0 00:14:12.769 }, 00:14:12.769 "claimed": false, 00:14:12.769 "zoned": false, 00:14:12.769 "supported_io_types": { 00:14:12.769 
"read": true, 00:14:12.769 "write": true, 00:14:12.769 "unmap": true, 00:14:12.769 "flush": true, 00:14:12.769 "reset": true, 00:14:12.769 "nvme_admin": false, 00:14:12.769 "nvme_io": false, 00:14:12.769 "nvme_io_md": false, 00:14:12.769 "write_zeroes": true, 00:14:12.769 "zcopy": true, 00:14:12.769 "get_zone_info": false, 00:14:12.769 "zone_management": false, 00:14:12.769 "zone_append": false, 00:14:12.769 "compare": false, 00:14:12.769 "compare_and_write": false, 00:14:12.769 "abort": true, 00:14:12.769 "seek_hole": false, 00:14:12.769 "seek_data": false, 00:14:12.769 "copy": true, 00:14:12.769 "nvme_iov_md": false 00:14:12.769 }, 00:14:12.769 "memory_domains": [ 00:14:12.769 { 00:14:12.769 "dma_device_id": "system", 00:14:12.769 "dma_device_type": 1 00:14:12.769 }, 00:14:12.769 { 00:14:12.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.769 "dma_device_type": 2 00:14:12.769 } 00:14:12.769 ], 00:14:12.769 "driver_specific": {} 00:14:12.769 } 00:14:12.769 ] 00:14:12.769 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:12.769 06:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:12.769 06:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.769 06:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:13.028 BaseBdev4 00:14:13.028 06:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:13.028 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:13.028 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:13.028 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:13.028 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:13.028 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:13.028 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:13.288 06:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:13.288 [ 00:14:13.288 { 00:14:13.288 "name": "BaseBdev4", 00:14:13.288 "aliases": [ 00:14:13.288 "34591121-6bd2-47c5-a671-ce195cb7a38c" 00:14:13.288 ], 00:14:13.288 "product_name": "Malloc disk", 00:14:13.288 "block_size": 512, 00:14:13.288 "num_blocks": 65536, 00:14:13.288 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:13.288 "assigned_rate_limits": { 00:14:13.288 "rw_ios_per_sec": 0, 00:14:13.288 "rw_mbytes_per_sec": 0, 00:14:13.288 "r_mbytes_per_sec": 0, 00:14:13.288 "w_mbytes_per_sec": 0 00:14:13.288 }, 00:14:13.288 "claimed": false, 00:14:13.288 "zoned": false, 00:14:13.288 "supported_io_types": { 00:14:13.288 "read": true, 00:14:13.288 "write": true, 00:14:13.288 "unmap": true, 00:14:13.288 "flush": true, 00:14:13.288 "reset": true, 00:14:13.288 "nvme_admin": false, 00:14:13.288 "nvme_io": false, 00:14:13.288 "nvme_io_md": false, 00:14:13.288 "write_zeroes": true, 00:14:13.288 "zcopy": true, 00:14:13.288 "get_zone_info": false, 00:14:13.288 
"zone_management": false, 00:14:13.288 "zone_append": false, 00:14:13.288 "compare": false, 00:14:13.288 "compare_and_write": false, 00:14:13.288 "abort": true, 00:14:13.288 "seek_hole": false, 00:14:13.288 "seek_data": false, 00:14:13.288 "copy": true, 00:14:13.288 "nvme_iov_md": false 00:14:13.288 }, 00:14:13.288 "memory_domains": [ 00:14:13.288 { 00:14:13.288 "dma_device_id": "system", 00:14:13.288 "dma_device_type": 1 00:14:13.288 }, 00:14:13.288 { 00:14:13.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.288 "dma_device_type": 2 00:14:13.288 } 00:14:13.288 ], 00:14:13.288 "driver_specific": {} 00:14:13.288 } 00:14:13.288 ] 00:14:13.288 06:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:13.288 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:13.288 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:13.288 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:13.548 [2024-08-13 06:09:15.212919] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.548 [2024-08-13 06:09:15.213056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.548 [2024-08-13 06:09:15.213094] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.548 [2024-08-13 06:09:15.214720] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.548 [2024-08-13 06:09:15.214812] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.548 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.817 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.818 "name": "Existed_Raid", 00:14:13.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.818 "strip_size_kb": 
64, 00:14:13.818 "state": "configuring", 00:14:13.818 "raid_level": "concat", 00:14:13.818 "superblock": false, 00:14:13.818 "num_base_bdevs": 4, 00:14:13.818 "num_base_bdevs_discovered": 3, 00:14:13.818 "num_base_bdevs_operational": 4, 00:14:13.818 "base_bdevs_list": [ 00:14:13.818 { 00:14:13.818 "name": "BaseBdev1", 00:14:13.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.818 "is_configured": false, 00:14:13.818 "data_offset": 0, 00:14:13.818 "data_size": 0 00:14:13.818 }, 00:14:13.818 { 00:14:13.818 "name": "BaseBdev2", 00:14:13.818 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:13.818 "is_configured": true, 00:14:13.818 "data_offset": 0, 00:14:13.818 "data_size": 65536 00:14:13.818 }, 00:14:13.818 { 00:14:13.818 "name": "BaseBdev3", 00:14:13.818 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:13.818 "is_configured": true, 00:14:13.818 "data_offset": 0, 00:14:13.818 "data_size": 65536 00:14:13.818 }, 00:14:13.818 { 00:14:13.818 "name": "BaseBdev4", 00:14:13.818 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:13.818 "is_configured": true, 00:14:13.818 "data_offset": 0, 00:14:13.818 "data_size": 65536 00:14:13.818 } 00:14:13.818 ] 00:14:13.818 }' 00:14:13.818 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.818 06:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 06:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:14.427 [2024-08-13 06:09:16.155294] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.427 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.702 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.702 "name": "Existed_Raid", 00:14:14.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.702 "strip_size_kb": 64, 00:14:14.702 "state": "configuring", 00:14:14.702 "raid_level": "concat", 00:14:14.702 "superblock": false, 00:14:14.702 "num_base_bdevs": 
4, 00:14:14.702 "num_base_bdevs_discovered": 2, 00:14:14.702 "num_base_bdevs_operational": 4, 00:14:14.702 "base_bdevs_list": [ 00:14:14.702 { 00:14:14.702 "name": "BaseBdev1", 00:14:14.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.702 "is_configured": false, 00:14:14.702 "data_offset": 0, 00:14:14.702 "data_size": 0 00:14:14.702 }, 00:14:14.702 { 00:14:14.702 "name": null, 00:14:14.702 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:14.702 "is_configured": false, 00:14:14.702 "data_offset": 0, 00:14:14.702 "data_size": 65536 00:14:14.702 }, 00:14:14.702 { 00:14:14.702 "name": "BaseBdev3", 00:14:14.702 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:14.702 "is_configured": true, 00:14:14.702 "data_offset": 0, 00:14:14.702 "data_size": 65536 00:14:14.702 }, 00:14:14.702 { 00:14:14.702 "name": "BaseBdev4", 00:14:14.702 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:14.702 "is_configured": true, 00:14:14.702 "data_offset": 0, 00:14:14.702 "data_size": 65536 00:14:14.702 } 00:14:14.702 ] 00:14:14.702 }' 00:14:14.702 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.702 06:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.270 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.270 06:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.530 BaseBdev1 00:14:15.530 [2024-08-13 06:09:17.252269] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:15.530 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:15.789 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.049 [ 00:14:16.049 { 00:14:16.049 "name": "BaseBdev1", 00:14:16.049 "aliases": [ 00:14:16.049 "fa539d51-52f5-4224-b3b5-9716a0a96975" 00:14:16.049 ], 00:14:16.049 "product_name": "Malloc disk", 00:14:16.049 "block_size": 512, 00:14:16.049 "num_blocks": 65536, 00:14:16.049 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:16.049 "assigned_rate_limits": { 00:14:16.049 "rw_ios_per_sec": 0, 00:14:16.049 "rw_mbytes_per_sec": 0, 00:14:16.049 "r_mbytes_per_sec": 0, 
00:14:16.049 "w_mbytes_per_sec": 0 00:14:16.049 }, 00:14:16.049 "claimed": true, 00:14:16.049 "claim_type": "exclusive_write", 00:14:16.049 "zoned": false, 00:14:16.049 "supported_io_types": { 00:14:16.049 "read": true, 00:14:16.049 "write": true, 00:14:16.049 "unmap": true, 00:14:16.049 "flush": true, 00:14:16.049 "reset": true, 00:14:16.049 "nvme_admin": false, 00:14:16.049 "nvme_io": false, 00:14:16.049 "nvme_io_md": false, 00:14:16.049 "write_zeroes": true, 00:14:16.049 "zcopy": true, 00:14:16.049 "get_zone_info": false, 00:14:16.049 "zone_management": false, 00:14:16.049 "zone_append": false, 00:14:16.049 "compare": false, 00:14:16.049 "compare_and_write": false, 00:14:16.049 "abort": true, 00:14:16.049 "seek_hole": false, 00:14:16.049 "seek_data": false, 00:14:16.049 "copy": true, 00:14:16.049 "nvme_iov_md": false 00:14:16.049 }, 00:14:16.049 "memory_domains": [ 00:14:16.049 { 00:14:16.049 "dma_device_id": "system", 00:14:16.049 "dma_device_type": 1 00:14:16.049 }, 00:14:16.049 { 00:14:16.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.049 "dma_device_type": 2 00:14:16.049 } 00:14:16.049 ], 00:14:16.049 "driver_specific": {} 00:14:16.049 } 00:14:16.049 ] 00:14:16.049 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:16.049 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.049 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.050 "name": "Existed_Raid", 00:14:16.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.050 "strip_size_kb": 64, 00:14:16.050 "state": "configuring", 00:14:16.050 "raid_level": "concat", 00:14:16.050 "superblock": false, 00:14:16.050 "num_base_bdevs": 4, 00:14:16.050 "num_base_bdevs_discovered": 3, 00:14:16.050 "num_base_bdevs_operational": 4, 00:14:16.050 "base_bdevs_list": [ 00:14:16.050 { 00:14:16.050 "name": "BaseBdev1", 00:14:16.050 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:16.050 "is_configured": true, 00:14:16.050 "data_offset": 0, 00:14:16.050 "data_size": 65536 00:14:16.050 }, 00:14:16.050 { 00:14:16.050 "name": null, 00:14:16.050 
"uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:16.050 "is_configured": false, 00:14:16.050 "data_offset": 0, 00:14:16.050 "data_size": 65536 00:14:16.050 }, 00:14:16.050 { 00:14:16.050 "name": "BaseBdev3", 00:14:16.050 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:16.050 "is_configured": true, 00:14:16.050 "data_offset": 0, 00:14:16.050 "data_size": 65536 00:14:16.050 }, 00:14:16.050 { 00:14:16.050 "name": "BaseBdev4", 00:14:16.050 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:16.050 "is_configured": true, 00:14:16.050 "data_offset": 0, 00:14:16.050 "data_size": 65536 00:14:16.050 } 00:14:16.050 ] 00:14:16.050 }' 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.050 06:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.619 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.619 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:16.878 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:16.878 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:17.138 [2024-08-13 06:09:18.737794] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.138 "name": "Existed_Raid", 00:14:17.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.138 "strip_size_kb": 64, 00:14:17.138 "state": "configuring", 00:14:17.138 "raid_level": "concat", 00:14:17.138 "superblock": false, 00:14:17.138 "num_base_bdevs": 4, 00:14:17.138 "num_base_bdevs_discovered": 2, 00:14:17.138 "num_base_bdevs_operational": 4, 00:14:17.138 "base_bdevs_list": [ 
00:14:17.138 { 00:14:17.138 "name": "BaseBdev1", 00:14:17.138 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:17.138 "is_configured": true, 00:14:17.138 "data_offset": 0, 00:14:17.138 "data_size": 65536 00:14:17.138 }, 00:14:17.138 { 00:14:17.138 "name": null, 00:14:17.138 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:17.138 "is_configured": false, 00:14:17.138 "data_offset": 0, 00:14:17.138 "data_size": 65536 00:14:17.138 }, 00:14:17.138 { 00:14:17.138 "name": null, 00:14:17.138 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:17.138 "is_configured": false, 00:14:17.138 "data_offset": 0, 00:14:17.138 "data_size": 65536 00:14:17.138 }, 00:14:17.138 { 00:14:17.138 "name": "BaseBdev4", 00:14:17.138 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:17.138 "is_configured": true, 00:14:17.138 "data_offset": 0, 00:14:17.138 "data_size": 65536 00:14:17.138 } 00:14:17.138 ] 00:14:17.138 }' 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.138 06:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.708 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.708 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.966 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:17.966 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:18.226 [2024-08-13 06:09:19.824052] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.226 06:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.486 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.486 "name": "Existed_Raid", 00:14:18.486 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:18.486 "strip_size_kb": 64, 00:14:18.486 "state": "configuring", 00:14:18.486 "raid_level": "concat", 00:14:18.486 "superblock": false, 00:14:18.486 "num_base_bdevs": 4, 00:14:18.486 "num_base_bdevs_discovered": 3, 00:14:18.486 "num_base_bdevs_operational": 4, 00:14:18.486 "base_bdevs_list": [ 00:14:18.486 { 00:14:18.486 "name": "BaseBdev1", 00:14:18.486 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:18.486 "is_configured": true, 00:14:18.486 "data_offset": 0, 00:14:18.486 "data_size": 65536 00:14:18.486 }, 00:14:18.486 { 00:14:18.486 "name": null, 00:14:18.486 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:18.486 "is_configured": false, 00:14:18.486 "data_offset": 0, 00:14:18.486 "data_size": 65536 00:14:18.486 }, 00:14:18.486 { 00:14:18.486 "name": "BaseBdev3", 00:14:18.486 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:18.486 "is_configured": true, 00:14:18.486 "data_offset": 0, 00:14:18.486 "data_size": 65536 00:14:18.486 }, 00:14:18.486 { 00:14:18.486 "name": "BaseBdev4", 00:14:18.486 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:18.486 "is_configured": true, 00:14:18.486 "data_offset": 0, 00:14:18.486 "data_size": 65536 00:14:18.486 } 00:14:18.486 ] 00:14:18.486 }' 00:14:18.486 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.486 06:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.056 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.057 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.057 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:19.057 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.317 [2024-08-13 06:09:20.946139] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.317 06:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.577 06:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.577 "name": "Existed_Raid", 00:14:19.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.577 "strip_size_kb": 64, 00:14:19.577 "state": "configuring", 00:14:19.577 "raid_level": "concat", 00:14:19.577 "superblock": false, 00:14:19.577 "num_base_bdevs": 4, 00:14:19.577 "num_base_bdevs_discovered": 2, 00:14:19.577 "num_base_bdevs_operational": 4, 00:14:19.577 "base_bdevs_list": [ 00:14:19.577 { 00:14:19.577 "name": null, 00:14:19.577 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:19.577 "is_configured": false, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": null, 00:14:19.577 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:19.577 "is_configured": false, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": "BaseBdev3", 00:14:19.577 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": "BaseBdev4", 00:14:19.577 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 } 00:14:19.577 ] 00:14:19.577 }' 00:14:19.577 06:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.577 06:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.147 06:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.147 06:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.147 06:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:20.147 06:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:20.407 [2024-08-13 06:09:22.043020] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.407 
06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.407 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.667 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.667 "name": "Existed_Raid", 00:14:20.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.667 "strip_size_kb": 64, 00:14:20.667 "state": "configuring", 00:14:20.667 "raid_level": "concat", 00:14:20.667 "superblock": false, 00:14:20.668 "num_base_bdevs": 4, 00:14:20.668 "num_base_bdevs_discovered": 3, 00:14:20.668 "num_base_bdevs_operational": 4, 00:14:20.668 "base_bdevs_list": [ 00:14:20.668 { 00:14:20.668 "name": null, 00:14:20.668 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:20.668 "is_configured": false, 00:14:20.668 "data_offset": 0, 00:14:20.668 "data_size": 65536 00:14:20.668 }, 00:14:20.668 { 00:14:20.668 "name": "BaseBdev2", 00:14:20.668 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:20.668 "is_configured": true, 00:14:20.668 "data_offset": 0, 00:14:20.668 "data_size": 65536 00:14:20.668 }, 00:14:20.668 { 00:14:20.668 "name": "BaseBdev3", 00:14:20.668 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:20.668 "is_configured": true, 00:14:20.668 "data_offset": 0, 00:14:20.668 "data_size": 65536 00:14:20.668 }, 00:14:20.668 { 00:14:20.668 "name": "BaseBdev4", 00:14:20.668 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:20.668 "is_configured": true, 00:14:20.668 "data_offset": 0, 00:14:20.668 "data_size": 65536 00:14:20.668 } 00:14:20.668 ] 00:14:20.668 }' 00:14:20.668 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.668 06:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.237 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.237 06:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.237 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:21.237 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.237 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:21.497 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fa539d51-52f5-4224-b3b5-9716a0a96975 00:14:21.757 [2024-08-13 06:09:23.375559] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:21.757 [2024-08-13 06:09:23.375674] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:21.757 [2024-08-13 06:09:23.375690] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:21.757 [2024-08-13 06:09:23.375930] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:21.757 [2024-08-13 06:09:23.376056] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:21.757 [2024-08-13 06:09:23.376064] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:21.757 [2024-08-13 06:09:23.376218] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.757 NewBaseBdev 00:14:21.757 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:21.757 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:21.757 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:21.757 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:21.757 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:21.757 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:21.757 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.022 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:22.022 [ 00:14:22.022 { 00:14:22.022 "name": "NewBaseBdev", 00:14:22.022 "aliases": [ 00:14:22.022 "fa539d51-52f5-4224-b3b5-9716a0a96975" 00:14:22.022 ], 00:14:22.023 "product_name": "Malloc disk", 00:14:22.023 "block_size": 512, 00:14:22.023 "num_blocks": 65536, 00:14:22.023 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:22.023 "assigned_rate_limits": { 00:14:22.023 "rw_ios_per_sec": 0, 00:14:22.023 "rw_mbytes_per_sec": 0, 00:14:22.023 "r_mbytes_per_sec": 0, 00:14:22.023 "w_mbytes_per_sec": 0 00:14:22.023 }, 00:14:22.023 "claimed": true, 00:14:22.023 "claim_type": "exclusive_write", 00:14:22.023 "zoned": false, 00:14:22.023 "supported_io_types": { 00:14:22.023 "read": true, 00:14:22.023 "write": true, 00:14:22.023 "unmap": true, 00:14:22.023 "flush": true, 00:14:22.023 "reset": true, 00:14:22.023 "nvme_admin": false, 00:14:22.023 "nvme_io": false, 00:14:22.023 "nvme_io_md": false, 00:14:22.023 "write_zeroes": true, 00:14:22.023 "zcopy": true, 00:14:22.023 "get_zone_info": false, 00:14:22.023 "zone_management": false, 00:14:22.023 "zone_append": false, 00:14:22.023 "compare": false, 00:14:22.023 "compare_and_write": false, 00:14:22.023 "abort": true, 00:14:22.023 "seek_hole": false, 00:14:22.023 "seek_data": false, 00:14:22.023 "copy": true, 00:14:22.023 "nvme_iov_md": false 00:14:22.023 }, 00:14:22.023 "memory_domains": [ 00:14:22.023 { 00:14:22.023 "dma_device_id": "system", 00:14:22.023 "dma_device_type": 1 00:14:22.023 }, 00:14:22.023 { 00:14:22.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.023 "dma_device_type": 2 00:14:22.023 } 00:14:22.023 ], 00:14:22.023 "driver_specific": {} 00:14:22.023 } 00:14:22.023 ] 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.285 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.286 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.286 06:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.286 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.286 "name": "Existed_Raid", 00:14:22.286 "uuid": "5df19306-7455-4ec4-8b24-cf74f681c296", 00:14:22.286 "strip_size_kb": 64, 00:14:22.286 "state": "online", 00:14:22.286 "raid_level": "concat", 00:14:22.286 "superblock": false, 00:14:22.286 "num_base_bdevs": 4, 00:14:22.286 "num_base_bdevs_discovered": 4, 00:14:22.286 "num_base_bdevs_operational": 4, 00:14:22.286 "base_bdevs_list": [ 00:14:22.286 { 00:14:22.286 "name": "NewBaseBdev", 00:14:22.286 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:22.286 "is_configured": true, 00:14:22.286 "data_offset": 0, 00:14:22.286 "data_size": 65536 00:14:22.286 }, 00:14:22.286 { 00:14:22.286 "name": "BaseBdev2", 00:14:22.286 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:22.286 "is_configured": true, 00:14:22.286 "data_offset": 0, 00:14:22.286 "data_size": 65536 00:14:22.286 }, 00:14:22.286 { 00:14:22.286 "name": "BaseBdev3", 00:14:22.286 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:22.286 "is_configured": true, 00:14:22.286 "data_offset": 0, 00:14:22.286 "data_size": 65536 00:14:22.286 }, 00:14:22.286 { 00:14:22.286 "name": "BaseBdev4", 00:14:22.286 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:22.286 "is_configured": true, 00:14:22.286 "data_offset": 0, 00:14:22.286 "data_size": 65536 00:14:22.286 } 00:14:22.286 ] 00:14:22.286 }' 00:14:22.286 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.286 06:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:22.855 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:23.115 [2024-08-13 06:09:24.745526] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.115 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:23.115 "name": "Existed_Raid", 00:14:23.115 "aliases": [ 00:14:23.115 "5df19306-7455-4ec4-8b24-cf74f681c296" 00:14:23.115 ], 00:14:23.115 "product_name": "Raid Volume", 00:14:23.115 "block_size": 512, 00:14:23.115 "num_blocks": 262144, 00:14:23.115 "uuid": "5df19306-7455-4ec4-8b24-cf74f681c296", 00:14:23.115 "assigned_rate_limits": { 00:14:23.115 "rw_ios_per_sec": 0, 00:14:23.115 "rw_mbytes_per_sec": 0, 00:14:23.115 "r_mbytes_per_sec": 0, 00:14:23.115 "w_mbytes_per_sec": 0 00:14:23.115 }, 00:14:23.115 "claimed": false, 00:14:23.115 "zoned": false, 00:14:23.115 "supported_io_types": { 00:14:23.115 "read": true, 00:14:23.115 "write": true, 00:14:23.115 "unmap": true, 00:14:23.115 "flush": true, 00:14:23.115 "reset": true, 00:14:23.115 "nvme_admin": false, 00:14:23.115 "nvme_io": false, 00:14:23.115 "nvme_io_md": false, 00:14:23.115 "write_zeroes": true, 00:14:23.115 "zcopy": false, 00:14:23.115 "get_zone_info": false, 00:14:23.115 "zone_management": false, 00:14:23.115 "zone_append": false, 00:14:23.115 "compare": false, 00:14:23.115 "compare_and_write": false, 00:14:23.115 "abort": false, 00:14:23.115 "seek_hole": false, 00:14:23.115 "seek_data": false, 00:14:23.115 "copy": false, 00:14:23.115 "nvme_iov_md": false 00:14:23.115 }, 00:14:23.115 "memory_domains": [ 00:14:23.115 { 00:14:23.115 "dma_device_id": "system", 00:14:23.115 "dma_device_type": 1 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.115 "dma_device_type": 2 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "dma_device_id": "system", 00:14:23.115 "dma_device_type": 1 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.115 "dma_device_type": 2 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "dma_device_id": "system", 00:14:23.115 "dma_device_type": 1 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.115 "dma_device_type": 2 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "dma_device_id": "system", 00:14:23.115 "dma_device_type": 1 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.115 "dma_device_type": 2 00:14:23.115 } 00:14:23.115 ], 00:14:23.115 "driver_specific": { 00:14:23.115 "raid": { 00:14:23.115 "uuid": "5df19306-7455-4ec4-8b24-cf74f681c296", 00:14:23.115 "strip_size_kb": 64, 00:14:23.115 "state": "online", 00:14:23.115 "raid_level": "concat", 00:14:23.115 "superblock": false, 00:14:23.115 "num_base_bdevs": 4, 00:14:23.115 "num_base_bdevs_discovered": 4, 00:14:23.115 "num_base_bdevs_operational": 4, 00:14:23.115 "base_bdevs_list": [ 00:14:23.115 { 00:14:23.115 "name": "NewBaseBdev", 00:14:23.115 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:23.115 "is_configured": true, 00:14:23.115 "data_offset": 0, 00:14:23.115 "data_size": 65536 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "name": "BaseBdev2", 00:14:23.115 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:23.115 "is_configured": true, 00:14:23.115 "data_offset": 0, 00:14:23.115 "data_size": 65536 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "name": "BaseBdev3", 00:14:23.115 
"uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:23.115 "is_configured": true, 00:14:23.115 "data_offset": 0, 00:14:23.115 "data_size": 65536 00:14:23.115 }, 00:14:23.115 { 00:14:23.115 "name": "BaseBdev4", 00:14:23.115 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:23.115 "is_configured": true, 00:14:23.115 "data_offset": 0, 00:14:23.115 "data_size": 65536 00:14:23.115 } 00:14:23.115 ] 00:14:23.115 } 00:14:23.115 } 00:14:23.115 }' 00:14:23.115 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.115 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:23.115 BaseBdev2 00:14:23.115 BaseBdev3 00:14:23.115 BaseBdev4' 00:14:23.115 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.115 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:23.115 06:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.374 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.375 "name": "NewBaseBdev", 00:14:23.375 "aliases": [ 00:14:23.375 "fa539d51-52f5-4224-b3b5-9716a0a96975" 00:14:23.375 ], 00:14:23.375 "product_name": "Malloc disk", 00:14:23.375 "block_size": 512, 00:14:23.375 "num_blocks": 65536, 00:14:23.375 "uuid": "fa539d51-52f5-4224-b3b5-9716a0a96975", 00:14:23.375 "assigned_rate_limits": { 00:14:23.375 "rw_ios_per_sec": 0, 00:14:23.375 "rw_mbytes_per_sec": 0, 00:14:23.375 "r_mbytes_per_sec": 0, 00:14:23.375 "w_mbytes_per_sec": 0 00:14:23.375 }, 00:14:23.375 "claimed": true, 00:14:23.375 "claim_type": "exclusive_write", 00:14:23.375 "zoned": false, 00:14:23.375 "supported_io_types": { 00:14:23.375 "read": true, 00:14:23.375 "write": true, 00:14:23.375 "unmap": true, 00:14:23.375 "flush": true, 00:14:23.375 "reset": true, 00:14:23.375 "nvme_admin": false, 00:14:23.375 "nvme_io": false, 00:14:23.375 "nvme_io_md": false, 00:14:23.375 "write_zeroes": true, 00:14:23.375 "zcopy": true, 00:14:23.375 "get_zone_info": false, 00:14:23.375 "zone_management": false, 00:14:23.375 "zone_append": false, 00:14:23.375 "compare": false, 00:14:23.375 "compare_and_write": false, 00:14:23.375 "abort": true, 00:14:23.375 "seek_hole": false, 00:14:23.375 "seek_data": false, 00:14:23.375 "copy": true, 00:14:23.375 "nvme_iov_md": false 00:14:23.375 }, 00:14:23.375 "memory_domains": [ 00:14:23.375 { 00:14:23.375 "dma_device_id": "system", 00:14:23.375 "dma_device_type": 1 00:14:23.375 }, 00:14:23.375 { 00:14:23.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.375 "dma_device_type": 2 00:14:23.375 } 00:14:23.375 ], 00:14:23.375 "driver_specific": {} 00:14:23.375 }' 00:14:23.375 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.375 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.375 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.375 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.375 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:23.634 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.894 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.894 "name": "BaseBdev2", 00:14:23.894 "aliases": [ 00:14:23.894 "53b3fda3-7f17-40d6-addc-4f4e0d75fc19" 00:14:23.894 ], 00:14:23.894 "product_name": "Malloc disk", 00:14:23.894 "block_size": 512, 00:14:23.894 "num_blocks": 65536, 00:14:23.894 "uuid": "53b3fda3-7f17-40d6-addc-4f4e0d75fc19", 00:14:23.894 "assigned_rate_limits": { 00:14:23.894 "rw_ios_per_sec": 0, 00:14:23.894 "rw_mbytes_per_sec": 0, 00:14:23.894 "r_mbytes_per_sec": 0, 00:14:23.894 "w_mbytes_per_sec": 0 00:14:23.894 }, 00:14:23.894 "claimed": true, 00:14:23.894 "claim_type": "exclusive_write", 00:14:23.894 "zoned": false, 00:14:23.894 "supported_io_types": { 00:14:23.894 "read": true, 00:14:23.894 "write": true, 00:14:23.894 "unmap": true, 00:14:23.894 "flush": true, 00:14:23.894 "reset": true, 00:14:23.894 "nvme_admin": false, 00:14:23.894 "nvme_io": false, 00:14:23.894 "nvme_io_md": false, 00:14:23.894 "write_zeroes": true, 00:14:23.894 "zcopy": true, 00:14:23.894 "get_zone_info": false, 00:14:23.894 "zone_management": false, 00:14:23.894 "zone_append": false, 00:14:23.894 "compare": false, 00:14:23.894 "compare_and_write": false, 00:14:23.894 "abort": true, 00:14:23.894 "seek_hole": false, 00:14:23.894 "seek_data": false, 00:14:23.894 "copy": true, 00:14:23.894 "nvme_iov_md": false 00:14:23.894 }, 00:14:23.894 "memory_domains": [ 00:14:23.894 { 00:14:23.894 "dma_device_id": "system", 00:14:23.894 "dma_device_type": 1 00:14:23.894 }, 00:14:23.894 { 00:14:23.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.894 "dma_device_type": 2 00:14:23.894 } 00:14:23.894 ], 00:14:23.894 "driver_specific": {} 00:14:23.894 }' 00:14:23.894 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.894 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.894 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.894 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.894 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.154 06:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:24.414 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.414 "name": "BaseBdev3", 00:14:24.414 "aliases": [ 00:14:24.414 "652f6c8e-b920-41cf-b0da-f3b2833c4cd2" 00:14:24.414 ], 00:14:24.414 "product_name": "Malloc disk", 00:14:24.414 "block_size": 512, 00:14:24.414 "num_blocks": 65536, 00:14:24.414 "uuid": "652f6c8e-b920-41cf-b0da-f3b2833c4cd2", 00:14:24.414 "assigned_rate_limits": { 00:14:24.414 "rw_ios_per_sec": 0, 00:14:24.414 "rw_mbytes_per_sec": 0, 00:14:24.414 "r_mbytes_per_sec": 0, 00:14:24.414 "w_mbytes_per_sec": 0 00:14:24.414 }, 00:14:24.414 "claimed": true, 00:14:24.414 "claim_type": "exclusive_write", 00:14:24.414 "zoned": false, 00:14:24.414 "supported_io_types": { 00:14:24.414 "read": true, 00:14:24.414 "write": true, 00:14:24.414 "unmap": true, 00:14:24.414 "flush": true, 00:14:24.414 "reset": true, 00:14:24.414 "nvme_admin": false, 00:14:24.414 "nvme_io": false, 00:14:24.414 "nvme_io_md": false, 00:14:24.414 "write_zeroes": true, 00:14:24.414 "zcopy": true, 00:14:24.414 "get_zone_info": false, 00:14:24.414 "zone_management": false, 00:14:24.414 "zone_append": false, 00:14:24.414 "compare": false, 00:14:24.414 "compare_and_write": false, 00:14:24.414 "abort": true, 00:14:24.414 "seek_hole": false, 00:14:24.414 "seek_data": false, 00:14:24.414 "copy": true, 00:14:24.414 "nvme_iov_md": false 00:14:24.414 }, 00:14:24.414 "memory_domains": [ 00:14:24.414 { 00:14:24.414 "dma_device_id": "system", 00:14:24.414 "dma_device_type": 1 00:14:24.414 }, 00:14:24.414 { 00:14:24.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.414 "dma_device_type": 2 00:14:24.414 } 00:14:24.414 ], 00:14:24.414 "driver_specific": {} 00:14:24.414 }' 00:14:24.414 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.414 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.414 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.414 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:24.675 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.935 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.935 "name": "BaseBdev4", 00:14:24.935 "aliases": [ 00:14:24.935 "34591121-6bd2-47c5-a671-ce195cb7a38c" 00:14:24.935 ], 00:14:24.935 "product_name": "Malloc disk", 00:14:24.935 "block_size": 512, 00:14:24.935 "num_blocks": 65536, 00:14:24.935 "uuid": "34591121-6bd2-47c5-a671-ce195cb7a38c", 00:14:24.935 "assigned_rate_limits": { 00:14:24.935 "rw_ios_per_sec": 0, 00:14:24.935 "rw_mbytes_per_sec": 0, 00:14:24.935 "r_mbytes_per_sec": 0, 00:14:24.935 "w_mbytes_per_sec": 0 00:14:24.935 }, 00:14:24.935 "claimed": true, 00:14:24.935 "claim_type": "exclusive_write", 00:14:24.935 "zoned": false, 00:14:24.935 "supported_io_types": { 00:14:24.935 "read": true, 00:14:24.935 "write": true, 00:14:24.935 "unmap": true, 00:14:24.935 "flush": true, 00:14:24.935 "reset": true, 00:14:24.935 "nvme_admin": false, 00:14:24.935 "nvme_io": false, 00:14:24.935 "nvme_io_md": false, 00:14:24.935 "write_zeroes": true, 00:14:24.935 "zcopy": true, 00:14:24.935 "get_zone_info": false, 00:14:24.935 "zone_management": false, 00:14:24.935 "zone_append": false, 00:14:24.935 "compare": false, 00:14:24.935 "compare_and_write": false, 00:14:24.935 "abort": true, 00:14:24.935 "seek_hole": false, 00:14:24.935 "seek_data": false, 00:14:24.935 "copy": true, 00:14:24.935 "nvme_iov_md": false 00:14:24.935 }, 00:14:24.935 "memory_domains": [ 00:14:24.935 { 00:14:24.935 "dma_device_id": "system", 00:14:24.935 "dma_device_type": 1 00:14:24.935 }, 00:14:24.935 { 00:14:24.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.935 "dma_device_type": 2 00:14:24.935 } 00:14:24.935 ], 00:14:24.935 "driver_specific": {} 00:14:24.935 }' 00:14:24.935 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.935 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.194 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.195 06:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:25.455 [2024-08-13 06:09:27.189224] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.455 [2024-08-13 06:09:27.189253] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.455 [2024-08-13 06:09:27.189348] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.455 [2024-08-13 06:09:27.189411] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.455 [2024-08-13 06:09:27.189423] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 86045 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 86045 ']' 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 86045 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:25.455 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86045 00:14:25.715 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:25.715 killing process with pid 86045 00:14:25.715 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:25.715 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86045' 00:14:25.715 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 86045 00:14:25.715 [2024-08-13 06:09:27.253384] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.715 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 86045 00:14:25.715 [2024-08-13 06:09:27.293857] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:25.976 00:14:25.976 real 0m27.710s 00:14:25.976 user 0m51.207s 00:14:25.976 sys 0m4.556s 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:25.976 ************************************ 00:14:25.976 END TEST raid_state_function_test 00:14:25.976 ************************************ 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.976 06:09:27 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:25.976 06:09:27 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:25.976 06:09:27 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:25.976 06:09:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.976 ************************************ 00:14:25.976 START TEST raid_state_function_test_sb 00:14:25.976 ************************************ 00:14:25.976 06:09:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:25.976 Process raid pid: 87049 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=87049 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 87049' 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 87049 /var/tmp/spdk-raid.sock 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 87049 ']' 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:25.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.976 06:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.976 [2024-08-13 06:09:27.705157] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:14:25.976 [2024-08-13 06:09:27.705370] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.236 [2024-08-13 06:09:27.852833] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.236 [2024-08-13 06:09:27.899122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.236 [2024-08-13 06:09:27.941181] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.236 [2024-08-13 06:09:27.941217] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.806 06:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.806 06:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:14:26.806 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:27.066 [2024-08-13 06:09:28.704704] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.066 [2024-08-13 06:09:28.704761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.066 [2024-08-13 06:09:28.704772] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.066 [2024-08-13 06:09:28.704779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.066 [2024-08-13 06:09:28.704790] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.066 [2024-08-13 06:09:28.704796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.066 [2024-08-13 06:09:28.704805] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:27.066 [2024-08-13 
06:09:28.704812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.066 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.326 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.326 "name": "Existed_Raid", 00:14:27.326 "uuid": "e490464c-0acd-4bac-90d3-1046e3f29abf", 00:14:27.326 "strip_size_kb": 64, 00:14:27.326 "state": "configuring", 00:14:27.326 "raid_level": "concat", 00:14:27.326 "superblock": true, 00:14:27.326 "num_base_bdevs": 4, 00:14:27.326 "num_base_bdevs_discovered": 0, 00:14:27.326 "num_base_bdevs_operational": 4, 00:14:27.326 "base_bdevs_list": [ 00:14:27.326 { 00:14:27.326 "name": "BaseBdev1", 00:14:27.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.326 "is_configured": false, 00:14:27.326 "data_offset": 0, 00:14:27.326 "data_size": 0 00:14:27.326 }, 00:14:27.326 { 00:14:27.326 "name": "BaseBdev2", 00:14:27.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.326 "is_configured": false, 00:14:27.326 "data_offset": 0, 00:14:27.326 "data_size": 0 00:14:27.326 }, 00:14:27.326 { 00:14:27.326 "name": "BaseBdev3", 00:14:27.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.326 "is_configured": false, 00:14:27.326 "data_offset": 0, 00:14:27.326 "data_size": 0 00:14:27.326 }, 00:14:27.326 { 00:14:27.326 "name": "BaseBdev4", 00:14:27.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.326 "is_configured": false, 00:14:27.326 "data_offset": 0, 00:14:27.326 "data_size": 0 00:14:27.326 } 00:14:27.326 ] 00:14:27.326 }' 00:14:27.326 06:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.326 06:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.896 06:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:27.896 [2024-08-13 06:09:29.626962] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:14:27.896 [2024-08-13 06:09:29.627089] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:27.896 06:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:28.156 [2024-08-13 06:09:29.818658] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:28.156 [2024-08-13 06:09:29.818756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:28.156 [2024-08-13 06:09:29.818782] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:28.156 [2024-08-13 06:09:29.818801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.156 [2024-08-13 06:09:29.818819] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:28.156 [2024-08-13 06:09:29.818836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:28.156 [2024-08-13 06:09:29.818854] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:28.156 [2024-08-13 06:09:29.818871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:28.156 06:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:28.416 [2024-08-13 06:09:30.031307] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.416 BaseBdev1 00:14:28.416 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:28.416 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:28.416 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:28.416 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:28.416 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:28.416 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:28.416 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:28.675 [ 00:14:28.675 { 00:14:28.675 "name": "BaseBdev1", 00:14:28.675 "aliases": [ 00:14:28.675 "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3" 00:14:28.675 ], 00:14:28.675 "product_name": "Malloc disk", 00:14:28.675 "block_size": 512, 00:14:28.675 "num_blocks": 65536, 00:14:28.675 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:28.675 "assigned_rate_limits": { 00:14:28.675 "rw_ios_per_sec": 0, 00:14:28.675 "rw_mbytes_per_sec": 0, 00:14:28.675 "r_mbytes_per_sec": 0, 00:14:28.675 "w_mbytes_per_sec": 0 00:14:28.675 }, 00:14:28.675 "claimed": true, 00:14:28.675 "claim_type": "exclusive_write", 00:14:28.675 "zoned": false, 00:14:28.675 
"supported_io_types": { 00:14:28.675 "read": true, 00:14:28.675 "write": true, 00:14:28.675 "unmap": true, 00:14:28.675 "flush": true, 00:14:28.675 "reset": true, 00:14:28.675 "nvme_admin": false, 00:14:28.675 "nvme_io": false, 00:14:28.675 "nvme_io_md": false, 00:14:28.675 "write_zeroes": true, 00:14:28.675 "zcopy": true, 00:14:28.675 "get_zone_info": false, 00:14:28.675 "zone_management": false, 00:14:28.675 "zone_append": false, 00:14:28.675 "compare": false, 00:14:28.675 "compare_and_write": false, 00:14:28.675 "abort": true, 00:14:28.675 "seek_hole": false, 00:14:28.675 "seek_data": false, 00:14:28.675 "copy": true, 00:14:28.675 "nvme_iov_md": false 00:14:28.675 }, 00:14:28.675 "memory_domains": [ 00:14:28.675 { 00:14:28.675 "dma_device_id": "system", 00:14:28.675 "dma_device_type": 1 00:14:28.675 }, 00:14:28.675 { 00:14:28.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.675 "dma_device_type": 2 00:14:28.675 } 00:14:28.675 ], 00:14:28.675 "driver_specific": {} 00:14:28.675 } 00:14:28.675 ] 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:28.675 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.676 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.676 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.676 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.676 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.676 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.935 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.935 "name": "Existed_Raid", 00:14:28.936 "uuid": "0b4a7f55-5c8a-4647-97c4-b8dbe0bed9c4", 00:14:28.936 "strip_size_kb": 64, 00:14:28.936 "state": "configuring", 00:14:28.936 "raid_level": "concat", 00:14:28.936 "superblock": true, 00:14:28.936 "num_base_bdevs": 4, 00:14:28.936 "num_base_bdevs_discovered": 1, 00:14:28.936 "num_base_bdevs_operational": 4, 00:14:28.936 "base_bdevs_list": [ 00:14:28.936 { 00:14:28.936 "name": "BaseBdev1", 00:14:28.936 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:28.936 "is_configured": true, 00:14:28.936 "data_offset": 2048, 00:14:28.936 "data_size": 63488 00:14:28.936 }, 00:14:28.936 { 00:14:28.936 "name": "BaseBdev2", 00:14:28.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.936 "is_configured": false, 00:14:28.936 "data_offset": 0, 
00:14:28.936 "data_size": 0 00:14:28.936 }, 00:14:28.936 { 00:14:28.936 "name": "BaseBdev3", 00:14:28.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.936 "is_configured": false, 00:14:28.936 "data_offset": 0, 00:14:28.936 "data_size": 0 00:14:28.936 }, 00:14:28.936 { 00:14:28.936 "name": "BaseBdev4", 00:14:28.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.936 "is_configured": false, 00:14:28.936 "data_offset": 0, 00:14:28.936 "data_size": 0 00:14:28.936 } 00:14:28.936 ] 00:14:28.936 }' 00:14:28.936 06:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.936 06:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.505 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:29.765 [2024-08-13 06:09:31.325194] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.765 [2024-08-13 06:09:31.325328] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:29.765 [2024-08-13 06:09:31.517196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.765 [2024-08-13 06:09:31.518880] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.765 [2024-08-13 06:09:31.518951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.765 [2024-08-13 06:09:31.518983] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:29.765 [2024-08-13 06:09:31.519001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.765 [2024-08-13 06:09:31.519020] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:29.765 [2024-08-13 06:09:31.519051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.765 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.025 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.025 "name": "Existed_Raid", 00:14:30.025 "uuid": "1c38d8e2-0ec1-4831-8311-901570db3913", 00:14:30.025 "strip_size_kb": 64, 00:14:30.025 "state": "configuring", 00:14:30.025 "raid_level": "concat", 00:14:30.025 "superblock": true, 00:14:30.025 "num_base_bdevs": 4, 00:14:30.025 "num_base_bdevs_discovered": 1, 00:14:30.025 "num_base_bdevs_operational": 4, 00:14:30.025 "base_bdevs_list": [ 00:14:30.025 { 00:14:30.025 "name": "BaseBdev1", 00:14:30.025 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:30.025 "is_configured": true, 00:14:30.025 "data_offset": 2048, 00:14:30.025 "data_size": 63488 00:14:30.025 }, 00:14:30.025 { 00:14:30.025 "name": "BaseBdev2", 00:14:30.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.025 "is_configured": false, 00:14:30.025 "data_offset": 0, 00:14:30.025 "data_size": 0 00:14:30.025 }, 00:14:30.025 { 00:14:30.025 "name": "BaseBdev3", 00:14:30.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.025 "is_configured": false, 00:14:30.025 "data_offset": 0, 00:14:30.025 "data_size": 0 00:14:30.025 }, 00:14:30.025 { 00:14:30.025 "name": "BaseBdev4", 00:14:30.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.025 "is_configured": false, 00:14:30.025 "data_offset": 0, 00:14:30.025 "data_size": 0 00:14:30.025 } 00:14:30.025 ] 00:14:30.025 }' 00:14:30.025 06:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.025 06:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.593 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:30.853 [2024-08-13 06:09:32.397658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.853 BaseBdev2 00:14:30.853 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:30.853 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:30.854 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:30.854 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:30.854 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:30.854 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:30.854 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:30.854 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.114 [ 00:14:31.114 { 00:14:31.114 "name": "BaseBdev2", 00:14:31.114 "aliases": [ 00:14:31.114 "9607b12a-d42f-4063-a111-3eed45b4b1a2" 00:14:31.114 ], 00:14:31.114 "product_name": "Malloc disk", 00:14:31.114 "block_size": 512, 00:14:31.114 "num_blocks": 65536, 00:14:31.114 "uuid": "9607b12a-d42f-4063-a111-3eed45b4b1a2", 00:14:31.114 "assigned_rate_limits": { 00:14:31.114 "rw_ios_per_sec": 0, 00:14:31.114 "rw_mbytes_per_sec": 0, 00:14:31.114 "r_mbytes_per_sec": 0, 00:14:31.114 "w_mbytes_per_sec": 0 00:14:31.114 }, 00:14:31.114 "claimed": true, 00:14:31.114 "claim_type": "exclusive_write", 00:14:31.114 "zoned": false, 00:14:31.114 "supported_io_types": { 00:14:31.114 "read": true, 00:14:31.114 "write": true, 00:14:31.114 "unmap": true, 00:14:31.114 "flush": true, 00:14:31.114 "reset": true, 00:14:31.114 "nvme_admin": false, 00:14:31.114 "nvme_io": false, 00:14:31.114 "nvme_io_md": false, 00:14:31.114 "write_zeroes": true, 00:14:31.114 "zcopy": true, 00:14:31.114 "get_zone_info": false, 00:14:31.114 "zone_management": false, 00:14:31.114 "zone_append": false, 00:14:31.114 "compare": false, 00:14:31.114 "compare_and_write": false, 00:14:31.114 "abort": true, 00:14:31.114 "seek_hole": false, 00:14:31.114 "seek_data": false, 00:14:31.114 "copy": true, 00:14:31.114 "nvme_iov_md": false 00:14:31.114 }, 00:14:31.114 "memory_domains": [ 00:14:31.114 { 00:14:31.114 "dma_device_id": "system", 00:14:31.114 "dma_device_type": 1 00:14:31.114 }, 00:14:31.114 { 00:14:31.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.114 "dma_device_type": 2 00:14:31.114 } 00:14:31.114 ], 00:14:31.114 "driver_specific": {} 00:14:31.114 } 00:14:31.114 ] 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.114 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
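The verify_raid_bdev_state helper entered here fetches every raid bdev and filters for Existed_Raid with jq; the field-by-field comparison itself is not expanded in this trace, so the following is a minimal sketch assuming each local declared at bdev_raid.sh@116-@124 is matched directly against the JSON (the expected values reflect the two-of-four point reached at this step):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # Still configuring: only BaseBdev1 and BaseBdev2 have been created so far.
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == concat ]]
    [[ $(jq .strip_size_kb <<< "$info") == 64 ]]
    [[ $(jq .num_base_bdevs_discovered <<< "$info") == 2 ]]
    [[ $(jq .num_base_bdevs_operational <<< "$info") == 4 ]]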
00:14:31.374 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.374 "name": "Existed_Raid", 00:14:31.374 "uuid": "1c38d8e2-0ec1-4831-8311-901570db3913", 00:14:31.374 "strip_size_kb": 64, 00:14:31.374 "state": "configuring", 00:14:31.374 "raid_level": "concat", 00:14:31.374 "superblock": true, 00:14:31.374 "num_base_bdevs": 4, 00:14:31.374 "num_base_bdevs_discovered": 2, 00:14:31.374 "num_base_bdevs_operational": 4, 00:14:31.374 "base_bdevs_list": [ 00:14:31.374 { 00:14:31.374 "name": "BaseBdev1", 00:14:31.374 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:31.374 "is_configured": true, 00:14:31.374 "data_offset": 2048, 00:14:31.374 "data_size": 63488 00:14:31.374 }, 00:14:31.374 { 00:14:31.374 "name": "BaseBdev2", 00:14:31.374 "uuid": "9607b12a-d42f-4063-a111-3eed45b4b1a2", 00:14:31.374 "is_configured": true, 00:14:31.374 "data_offset": 2048, 00:14:31.374 "data_size": 63488 00:14:31.374 }, 00:14:31.374 { 00:14:31.374 "name": "BaseBdev3", 00:14:31.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.374 "is_configured": false, 00:14:31.374 "data_offset": 0, 00:14:31.374 "data_size": 0 00:14:31.374 }, 00:14:31.374 { 00:14:31.374 "name": "BaseBdev4", 00:14:31.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.374 "is_configured": false, 00:14:31.374 "data_offset": 0, 00:14:31.374 "data_size": 0 00:14:31.374 } 00:14:31.374 ] 00:14:31.374 }' 00:14:31.374 06:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.374 06:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.944 06:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.944 [2024-08-13 06:09:33.726410] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.944 BaseBdev3 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:32.203 06:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:32.462 [ 00:14:32.462 { 00:14:32.462 "name": "BaseBdev3", 00:14:32.462 "aliases": [ 00:14:32.462 "56a87d30-a9bf-44c1-9b03-4d4f46b5c397" 00:14:32.462 ], 00:14:32.462 "product_name": "Malloc disk", 00:14:32.462 "block_size": 512, 00:14:32.462 "num_blocks": 65536, 00:14:32.462 "uuid": "56a87d30-a9bf-44c1-9b03-4d4f46b5c397", 00:14:32.462 "assigned_rate_limits": { 00:14:32.462 "rw_ios_per_sec": 0, 00:14:32.462 "rw_mbytes_per_sec": 0, 00:14:32.462 "r_mbytes_per_sec": 0, 
00:14:32.462 "w_mbytes_per_sec": 0 00:14:32.462 }, 00:14:32.462 "claimed": true, 00:14:32.462 "claim_type": "exclusive_write", 00:14:32.462 "zoned": false, 00:14:32.462 "supported_io_types": { 00:14:32.462 "read": true, 00:14:32.462 "write": true, 00:14:32.462 "unmap": true, 00:14:32.462 "flush": true, 00:14:32.462 "reset": true, 00:14:32.462 "nvme_admin": false, 00:14:32.462 "nvme_io": false, 00:14:32.462 "nvme_io_md": false, 00:14:32.462 "write_zeroes": true, 00:14:32.462 "zcopy": true, 00:14:32.462 "get_zone_info": false, 00:14:32.463 "zone_management": false, 00:14:32.463 "zone_append": false, 00:14:32.463 "compare": false, 00:14:32.463 "compare_and_write": false, 00:14:32.463 "abort": true, 00:14:32.463 "seek_hole": false, 00:14:32.463 "seek_data": false, 00:14:32.463 "copy": true, 00:14:32.463 "nvme_iov_md": false 00:14:32.463 }, 00:14:32.463 "memory_domains": [ 00:14:32.463 { 00:14:32.463 "dma_device_id": "system", 00:14:32.463 "dma_device_type": 1 00:14:32.463 }, 00:14:32.463 { 00:14:32.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.463 "dma_device_type": 2 00:14:32.463 } 00:14:32.463 ], 00:14:32.463 "driver_specific": {} 00:14:32.463 } 00:14:32.463 ] 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.463 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.723 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.723 "name": "Existed_Raid", 00:14:32.723 "uuid": "1c38d8e2-0ec1-4831-8311-901570db3913", 00:14:32.723 "strip_size_kb": 64, 00:14:32.723 "state": "configuring", 00:14:32.723 "raid_level": "concat", 00:14:32.723 "superblock": true, 00:14:32.723 "num_base_bdevs": 4, 00:14:32.723 "num_base_bdevs_discovered": 3, 00:14:32.723 "num_base_bdevs_operational": 4, 00:14:32.723 "base_bdevs_list": [ 00:14:32.723 { 00:14:32.723 
"name": "BaseBdev1", 00:14:32.723 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:32.723 "is_configured": true, 00:14:32.723 "data_offset": 2048, 00:14:32.723 "data_size": 63488 00:14:32.723 }, 00:14:32.723 { 00:14:32.723 "name": "BaseBdev2", 00:14:32.723 "uuid": "9607b12a-d42f-4063-a111-3eed45b4b1a2", 00:14:32.723 "is_configured": true, 00:14:32.723 "data_offset": 2048, 00:14:32.723 "data_size": 63488 00:14:32.723 }, 00:14:32.723 { 00:14:32.723 "name": "BaseBdev3", 00:14:32.723 "uuid": "56a87d30-a9bf-44c1-9b03-4d4f46b5c397", 00:14:32.723 "is_configured": true, 00:14:32.723 "data_offset": 2048, 00:14:32.723 "data_size": 63488 00:14:32.723 }, 00:14:32.723 { 00:14:32.723 "name": "BaseBdev4", 00:14:32.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.723 "is_configured": false, 00:14:32.723 "data_offset": 0, 00:14:32.723 "data_size": 0 00:14:32.723 } 00:14:32.723 ] 00:14:32.723 }' 00:14:32.723 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.723 06:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.292 06:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:33.292 [2024-08-13 06:09:35.066996] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.292 [2024-08-13 06:09:35.067202] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:33.292 [2024-08-13 06:09:35.067226] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:33.292 BaseBdev4 00:14:33.292 [2024-08-13 06:09:35.067487] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:33.292 [2024-08-13 06:09:35.067632] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:33.292 [2024-08-13 06:09:35.067648] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:33.292 [2024-08-13 06:09:35.067751] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:33.552 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:33.812 [ 00:14:33.812 { 00:14:33.812 "name": "BaseBdev4", 00:14:33.812 "aliases": [ 00:14:33.812 "60968c5a-be3c-4199-af05-473d700df043" 00:14:33.812 ], 00:14:33.812 "product_name": "Malloc disk", 00:14:33.812 "block_size": 
512, 00:14:33.812 "num_blocks": 65536, 00:14:33.812 "uuid": "60968c5a-be3c-4199-af05-473d700df043", 00:14:33.812 "assigned_rate_limits": { 00:14:33.812 "rw_ios_per_sec": 0, 00:14:33.812 "rw_mbytes_per_sec": 0, 00:14:33.812 "r_mbytes_per_sec": 0, 00:14:33.812 "w_mbytes_per_sec": 0 00:14:33.812 }, 00:14:33.812 "claimed": true, 00:14:33.812 "claim_type": "exclusive_write", 00:14:33.812 "zoned": false, 00:14:33.812 "supported_io_types": { 00:14:33.812 "read": true, 00:14:33.812 "write": true, 00:14:33.812 "unmap": true, 00:14:33.812 "flush": true, 00:14:33.812 "reset": true, 00:14:33.812 "nvme_admin": false, 00:14:33.812 "nvme_io": false, 00:14:33.812 "nvme_io_md": false, 00:14:33.812 "write_zeroes": true, 00:14:33.812 "zcopy": true, 00:14:33.812 "get_zone_info": false, 00:14:33.812 "zone_management": false, 00:14:33.812 "zone_append": false, 00:14:33.812 "compare": false, 00:14:33.812 "compare_and_write": false, 00:14:33.812 "abort": true, 00:14:33.812 "seek_hole": false, 00:14:33.812 "seek_data": false, 00:14:33.812 "copy": true, 00:14:33.812 "nvme_iov_md": false 00:14:33.812 }, 00:14:33.812 "memory_domains": [ 00:14:33.812 { 00:14:33.812 "dma_device_id": "system", 00:14:33.812 "dma_device_type": 1 00:14:33.812 }, 00:14:33.812 { 00:14:33.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.812 "dma_device_type": 2 00:14:33.812 } 00:14:33.812 ], 00:14:33.812 "driver_specific": {} 00:14:33.812 } 00:14:33.812 ] 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.812 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.072 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:34.072 "name": "Existed_Raid", 00:14:34.072 "uuid": "1c38d8e2-0ec1-4831-8311-901570db3913", 00:14:34.072 "strip_size_kb": 64, 00:14:34.072 "state": "online", 00:14:34.072 "raid_level": 
"concat", 00:14:34.072 "superblock": true, 00:14:34.072 "num_base_bdevs": 4, 00:14:34.072 "num_base_bdevs_discovered": 4, 00:14:34.072 "num_base_bdevs_operational": 4, 00:14:34.072 "base_bdevs_list": [ 00:14:34.072 { 00:14:34.072 "name": "BaseBdev1", 00:14:34.072 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:34.072 "is_configured": true, 00:14:34.072 "data_offset": 2048, 00:14:34.072 "data_size": 63488 00:14:34.072 }, 00:14:34.072 { 00:14:34.072 "name": "BaseBdev2", 00:14:34.072 "uuid": "9607b12a-d42f-4063-a111-3eed45b4b1a2", 00:14:34.072 "is_configured": true, 00:14:34.072 "data_offset": 2048, 00:14:34.072 "data_size": 63488 00:14:34.072 }, 00:14:34.072 { 00:14:34.072 "name": "BaseBdev3", 00:14:34.072 "uuid": "56a87d30-a9bf-44c1-9b03-4d4f46b5c397", 00:14:34.072 "is_configured": true, 00:14:34.072 "data_offset": 2048, 00:14:34.072 "data_size": 63488 00:14:34.072 }, 00:14:34.072 { 00:14:34.072 "name": "BaseBdev4", 00:14:34.072 "uuid": "60968c5a-be3c-4199-af05-473d700df043", 00:14:34.072 "is_configured": true, 00:14:34.072 "data_offset": 2048, 00:14:34.072 "data_size": 63488 00:14:34.072 } 00:14:34.072 ] 00:14:34.072 }' 00:14:34.072 06:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:34.072 06:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:34.641 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:34.902 [2024-08-13 06:09:36.445078] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:34.902 "name": "Existed_Raid", 00:14:34.902 "aliases": [ 00:14:34.902 "1c38d8e2-0ec1-4831-8311-901570db3913" 00:14:34.902 ], 00:14:34.902 "product_name": "Raid Volume", 00:14:34.902 "block_size": 512, 00:14:34.902 "num_blocks": 253952, 00:14:34.902 "uuid": "1c38d8e2-0ec1-4831-8311-901570db3913", 00:14:34.902 "assigned_rate_limits": { 00:14:34.902 "rw_ios_per_sec": 0, 00:14:34.902 "rw_mbytes_per_sec": 0, 00:14:34.902 "r_mbytes_per_sec": 0, 00:14:34.902 "w_mbytes_per_sec": 0 00:14:34.902 }, 00:14:34.902 "claimed": false, 00:14:34.902 "zoned": false, 00:14:34.902 "supported_io_types": { 00:14:34.902 "read": true, 00:14:34.902 "write": true, 00:14:34.902 "unmap": true, 00:14:34.902 "flush": true, 00:14:34.902 "reset": true, 00:14:34.902 "nvme_admin": false, 00:14:34.902 "nvme_io": false, 00:14:34.902 "nvme_io_md": false, 00:14:34.902 "write_zeroes": true, 00:14:34.902 "zcopy": false, 00:14:34.902 "get_zone_info": false, 00:14:34.902 "zone_management": false, 00:14:34.902 
"zone_append": false, 00:14:34.902 "compare": false, 00:14:34.902 "compare_and_write": false, 00:14:34.902 "abort": false, 00:14:34.902 "seek_hole": false, 00:14:34.902 "seek_data": false, 00:14:34.902 "copy": false, 00:14:34.902 "nvme_iov_md": false 00:14:34.902 }, 00:14:34.902 "memory_domains": [ 00:14:34.902 { 00:14:34.902 "dma_device_id": "system", 00:14:34.902 "dma_device_type": 1 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.902 "dma_device_type": 2 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "system", 00:14:34.902 "dma_device_type": 1 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.902 "dma_device_type": 2 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "system", 00:14:34.902 "dma_device_type": 1 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.902 "dma_device_type": 2 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "system", 00:14:34.902 "dma_device_type": 1 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.902 "dma_device_type": 2 00:14:34.902 } 00:14:34.902 ], 00:14:34.902 "driver_specific": { 00:14:34.902 "raid": { 00:14:34.902 "uuid": "1c38d8e2-0ec1-4831-8311-901570db3913", 00:14:34.902 "strip_size_kb": 64, 00:14:34.902 "state": "online", 00:14:34.902 "raid_level": "concat", 00:14:34.902 "superblock": true, 00:14:34.902 "num_base_bdevs": 4, 00:14:34.902 "num_base_bdevs_discovered": 4, 00:14:34.902 "num_base_bdevs_operational": 4, 00:14:34.902 "base_bdevs_list": [ 00:14:34.902 { 00:14:34.902 "name": "BaseBdev1", 00:14:34.902 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:34.902 "is_configured": true, 00:14:34.902 "data_offset": 2048, 00:14:34.902 "data_size": 63488 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "name": "BaseBdev2", 00:14:34.902 "uuid": "9607b12a-d42f-4063-a111-3eed45b4b1a2", 00:14:34.902 "is_configured": true, 00:14:34.902 "data_offset": 2048, 00:14:34.902 "data_size": 63488 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "name": "BaseBdev3", 00:14:34.902 "uuid": "56a87d30-a9bf-44c1-9b03-4d4f46b5c397", 00:14:34.902 "is_configured": true, 00:14:34.902 "data_offset": 2048, 00:14:34.902 "data_size": 63488 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "name": "BaseBdev4", 00:14:34.902 "uuid": "60968c5a-be3c-4199-af05-473d700df043", 00:14:34.902 "is_configured": true, 00:14:34.902 "data_offset": 2048, 00:14:34.902 "data_size": 63488 00:14:34.902 } 00:14:34.902 ] 00:14:34.902 } 00:14:34.902 } 00:14:34.902 }' 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:34.902 BaseBdev2 00:14:34.902 BaseBdev3 00:14:34.902 BaseBdev4' 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.902 "name": "BaseBdev1", 00:14:34.902 "aliases": [ 00:14:34.902 
"6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3" 00:14:34.902 ], 00:14:34.902 "product_name": "Malloc disk", 00:14:34.902 "block_size": 512, 00:14:34.902 "num_blocks": 65536, 00:14:34.902 "uuid": "6697f9f2-c3b5-4c7f-b34e-07770ca9b0d3", 00:14:34.902 "assigned_rate_limits": { 00:14:34.902 "rw_ios_per_sec": 0, 00:14:34.902 "rw_mbytes_per_sec": 0, 00:14:34.902 "r_mbytes_per_sec": 0, 00:14:34.902 "w_mbytes_per_sec": 0 00:14:34.902 }, 00:14:34.902 "claimed": true, 00:14:34.902 "claim_type": "exclusive_write", 00:14:34.902 "zoned": false, 00:14:34.902 "supported_io_types": { 00:14:34.902 "read": true, 00:14:34.902 "write": true, 00:14:34.902 "unmap": true, 00:14:34.902 "flush": true, 00:14:34.902 "reset": true, 00:14:34.902 "nvme_admin": false, 00:14:34.902 "nvme_io": false, 00:14:34.902 "nvme_io_md": false, 00:14:34.902 "write_zeroes": true, 00:14:34.902 "zcopy": true, 00:14:34.902 "get_zone_info": false, 00:14:34.902 "zone_management": false, 00:14:34.902 "zone_append": false, 00:14:34.902 "compare": false, 00:14:34.902 "compare_and_write": false, 00:14:34.902 "abort": true, 00:14:34.902 "seek_hole": false, 00:14:34.902 "seek_data": false, 00:14:34.902 "copy": true, 00:14:34.902 "nvme_iov_md": false 00:14:34.902 }, 00:14:34.902 "memory_domains": [ 00:14:34.902 { 00:14:34.902 "dma_device_id": "system", 00:14:34.902 "dma_device_type": 1 00:14:34.902 }, 00:14:34.902 { 00:14:34.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.902 "dma_device_type": 2 00:14:34.902 } 00:14:34.902 ], 00:14:34.902 "driver_specific": {} 00:14:34.902 }' 00:14:34.902 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.162 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.162 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:35.162 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.162 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.162 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.163 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.163 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.422 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.422 06:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.422 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.422 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.422 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:35.422 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:35.422 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:35.682 "name": "BaseBdev2", 00:14:35.682 "aliases": [ 00:14:35.682 "9607b12a-d42f-4063-a111-3eed45b4b1a2" 00:14:35.682 ], 00:14:35.682 "product_name": "Malloc disk", 00:14:35.682 "block_size": 512, 00:14:35.682 
"num_blocks": 65536, 00:14:35.682 "uuid": "9607b12a-d42f-4063-a111-3eed45b4b1a2", 00:14:35.682 "assigned_rate_limits": { 00:14:35.682 "rw_ios_per_sec": 0, 00:14:35.682 "rw_mbytes_per_sec": 0, 00:14:35.682 "r_mbytes_per_sec": 0, 00:14:35.682 "w_mbytes_per_sec": 0 00:14:35.682 }, 00:14:35.682 "claimed": true, 00:14:35.682 "claim_type": "exclusive_write", 00:14:35.682 "zoned": false, 00:14:35.682 "supported_io_types": { 00:14:35.682 "read": true, 00:14:35.682 "write": true, 00:14:35.682 "unmap": true, 00:14:35.682 "flush": true, 00:14:35.682 "reset": true, 00:14:35.682 "nvme_admin": false, 00:14:35.682 "nvme_io": false, 00:14:35.682 "nvme_io_md": false, 00:14:35.682 "write_zeroes": true, 00:14:35.682 "zcopy": true, 00:14:35.682 "get_zone_info": false, 00:14:35.682 "zone_management": false, 00:14:35.682 "zone_append": false, 00:14:35.682 "compare": false, 00:14:35.682 "compare_and_write": false, 00:14:35.682 "abort": true, 00:14:35.682 "seek_hole": false, 00:14:35.682 "seek_data": false, 00:14:35.682 "copy": true, 00:14:35.682 "nvme_iov_md": false 00:14:35.682 }, 00:14:35.682 "memory_domains": [ 00:14:35.682 { 00:14:35.682 "dma_device_id": "system", 00:14:35.682 "dma_device_type": 1 00:14:35.682 }, 00:14:35.682 { 00:14:35.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.682 "dma_device_type": 2 00:14:35.682 } 00:14:35.682 ], 00:14:35.682 "driver_specific": {} 00:14:35.682 }' 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.682 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:35.942 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:36.202 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:36.202 "name": "BaseBdev3", 00:14:36.202 "aliases": [ 00:14:36.202 "56a87d30-a9bf-44c1-9b03-4d4f46b5c397" 00:14:36.202 ], 00:14:36.202 "product_name": "Malloc disk", 00:14:36.202 "block_size": 512, 00:14:36.202 "num_blocks": 65536, 00:14:36.202 "uuid": "56a87d30-a9bf-44c1-9b03-4d4f46b5c397", 00:14:36.202 "assigned_rate_limits": { 00:14:36.202 "rw_ios_per_sec": 
0, 00:14:36.202 "rw_mbytes_per_sec": 0, 00:14:36.202 "r_mbytes_per_sec": 0, 00:14:36.202 "w_mbytes_per_sec": 0 00:14:36.202 }, 00:14:36.202 "claimed": true, 00:14:36.202 "claim_type": "exclusive_write", 00:14:36.202 "zoned": false, 00:14:36.202 "supported_io_types": { 00:14:36.202 "read": true, 00:14:36.202 "write": true, 00:14:36.202 "unmap": true, 00:14:36.202 "flush": true, 00:14:36.202 "reset": true, 00:14:36.202 "nvme_admin": false, 00:14:36.202 "nvme_io": false, 00:14:36.202 "nvme_io_md": false, 00:14:36.202 "write_zeroes": true, 00:14:36.202 "zcopy": true, 00:14:36.202 "get_zone_info": false, 00:14:36.202 "zone_management": false, 00:14:36.202 "zone_append": false, 00:14:36.202 "compare": false, 00:14:36.202 "compare_and_write": false, 00:14:36.202 "abort": true, 00:14:36.202 "seek_hole": false, 00:14:36.202 "seek_data": false, 00:14:36.202 "copy": true, 00:14:36.202 "nvme_iov_md": false 00:14:36.202 }, 00:14:36.202 "memory_domains": [ 00:14:36.202 { 00:14:36.202 "dma_device_id": "system", 00:14:36.202 "dma_device_type": 1 00:14:36.202 }, 00:14:36.202 { 00:14:36.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.202 "dma_device_type": 2 00:14:36.202 } 00:14:36.202 ], 00:14:36.202 "driver_specific": {} 00:14:36.202 }' 00:14:36.202 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.202 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.202 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:36.202 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.202 06:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:36.461 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:36.720 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:36.720 "name": "BaseBdev4", 00:14:36.720 "aliases": [ 00:14:36.720 "60968c5a-be3c-4199-af05-473d700df043" 00:14:36.720 ], 00:14:36.720 "product_name": "Malloc disk", 00:14:36.720 "block_size": 512, 00:14:36.720 "num_blocks": 65536, 00:14:36.721 "uuid": "60968c5a-be3c-4199-af05-473d700df043", 00:14:36.721 "assigned_rate_limits": { 00:14:36.721 "rw_ios_per_sec": 0, 00:14:36.721 "rw_mbytes_per_sec": 0, 00:14:36.721 "r_mbytes_per_sec": 0, 00:14:36.721 "w_mbytes_per_sec": 0 00:14:36.721 }, 00:14:36.721 "claimed": 
true, 00:14:36.721 "claim_type": "exclusive_write", 00:14:36.721 "zoned": false, 00:14:36.721 "supported_io_types": { 00:14:36.721 "read": true, 00:14:36.721 "write": true, 00:14:36.721 "unmap": true, 00:14:36.721 "flush": true, 00:14:36.721 "reset": true, 00:14:36.721 "nvme_admin": false, 00:14:36.721 "nvme_io": false, 00:14:36.721 "nvme_io_md": false, 00:14:36.721 "write_zeroes": true, 00:14:36.721 "zcopy": true, 00:14:36.721 "get_zone_info": false, 00:14:36.721 "zone_management": false, 00:14:36.721 "zone_append": false, 00:14:36.721 "compare": false, 00:14:36.721 "compare_and_write": false, 00:14:36.721 "abort": true, 00:14:36.721 "seek_hole": false, 00:14:36.721 "seek_data": false, 00:14:36.721 "copy": true, 00:14:36.721 "nvme_iov_md": false 00:14:36.721 }, 00:14:36.721 "memory_domains": [ 00:14:36.721 { 00:14:36.721 "dma_device_id": "system", 00:14:36.721 "dma_device_type": 1 00:14:36.721 }, 00:14:36.721 { 00:14:36.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.721 "dma_device_type": 2 00:14:36.721 } 00:14:36.721 ], 00:14:36.721 "driver_specific": {} 00:14:36.721 }' 00:14:36.721 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.721 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:36.721 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:36.721 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:36.980 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:37.240 [2024-08-13 06:09:38.940743] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.240 [2024-08-13 06:09:38.940786] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.240 [2024-08-13 06:09:38.940850] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.240 06:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.511 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.511 "name": "Existed_Raid", 00:14:37.511 "uuid": "1c38d8e2-0ec1-4831-8311-901570db3913", 00:14:37.511 "strip_size_kb": 64, 00:14:37.511 "state": "offline", 00:14:37.511 "raid_level": "concat", 00:14:37.511 "superblock": true, 00:14:37.511 "num_base_bdevs": 4, 00:14:37.511 "num_base_bdevs_discovered": 3, 00:14:37.511 "num_base_bdevs_operational": 3, 00:14:37.511 "base_bdevs_list": [ 00:14:37.511 { 00:14:37.511 "name": null, 00:14:37.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.511 "is_configured": false, 00:14:37.511 "data_offset": 2048, 00:14:37.511 "data_size": 63488 00:14:37.511 }, 00:14:37.511 { 00:14:37.511 "name": "BaseBdev2", 00:14:37.511 "uuid": "9607b12a-d42f-4063-a111-3eed45b4b1a2", 00:14:37.511 "is_configured": true, 00:14:37.511 "data_offset": 2048, 00:14:37.511 "data_size": 63488 00:14:37.511 }, 00:14:37.511 { 00:14:37.511 "name": "BaseBdev3", 00:14:37.511 "uuid": "56a87d30-a9bf-44c1-9b03-4d4f46b5c397", 00:14:37.511 "is_configured": true, 00:14:37.511 "data_offset": 2048, 00:14:37.511 "data_size": 63488 00:14:37.511 }, 00:14:37.511 { 00:14:37.511 "name": "BaseBdev4", 00:14:37.511 "uuid": "60968c5a-be3c-4199-af05-473d700df043", 00:14:37.511 "is_configured": true, 00:14:37.511 "data_offset": 2048, 00:14:37.511 "data_size": 63488 00:14:37.511 } 00:14:37.511 ] 00:14:37.511 }' 00:14:37.511 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.511 06:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.137 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:38.137 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:38.137 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.137 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq 
-r '.[0]["name"]' 00:14:38.137 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:38.137 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.137 06:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:38.408 [2024-08-13 06:09:40.046386] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:38.408 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:38.408 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:38.408 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:38.408 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.667 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:38.667 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.667 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:38.927 [2024-08-13 06:09:40.461069] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:38.927 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:38.927 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:38.927 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.927 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:38.927 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:38.927 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.927 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:39.187 [2024-08-13 06:09:40.867628] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:39.187 [2024-08-13 06:09:40.867683] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:39.187 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:39.187 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:39.187 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.187 06:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:39.447 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:39.447 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:39.447 
06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:39.447 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:39.447 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:39.447 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.706 BaseBdev2 00:14:39.706 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:39.706 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:39.706 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:39.706 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:39.706 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:39.706 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:39.706 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.966 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.966 [ 00:14:39.966 { 00:14:39.966 "name": "BaseBdev2", 00:14:39.966 "aliases": [ 00:14:39.966 "46335b9e-f636-45af-9f55-f9ad9475be81" 00:14:39.966 ], 00:14:39.966 "product_name": "Malloc disk", 00:14:39.966 "block_size": 512, 00:14:39.966 "num_blocks": 65536, 00:14:39.966 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:39.966 "assigned_rate_limits": { 00:14:39.966 "rw_ios_per_sec": 0, 00:14:39.966 "rw_mbytes_per_sec": 0, 00:14:39.966 "r_mbytes_per_sec": 0, 00:14:39.966 "w_mbytes_per_sec": 0 00:14:39.966 }, 00:14:39.966 "claimed": false, 00:14:39.966 "zoned": false, 00:14:39.966 "supported_io_types": { 00:14:39.966 "read": true, 00:14:39.966 "write": true, 00:14:39.966 "unmap": true, 00:14:39.966 "flush": true, 00:14:39.966 "reset": true, 00:14:39.966 "nvme_admin": false, 00:14:39.966 "nvme_io": false, 00:14:39.966 "nvme_io_md": false, 00:14:39.966 "write_zeroes": true, 00:14:39.966 "zcopy": true, 00:14:39.966 "get_zone_info": false, 00:14:39.966 "zone_management": false, 00:14:39.966 "zone_append": false, 00:14:39.966 "compare": false, 00:14:39.966 "compare_and_write": false, 00:14:39.966 "abort": true, 00:14:39.966 "seek_hole": false, 00:14:39.966 "seek_data": false, 00:14:39.966 "copy": true, 00:14:39.966 "nvme_iov_md": false 00:14:39.966 }, 00:14:39.966 "memory_domains": [ 00:14:39.966 { 00:14:39.966 "dma_device_id": "system", 00:14:39.966 "dma_device_type": 1 00:14:39.966 }, 00:14:39.966 { 00:14:39.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.966 "dma_device_type": 2 00:14:39.966 } 00:14:39.966 ], 00:14:39.966 "driver_specific": {} 00:14:39.966 } 00:14:39.966 ] 00:14:39.966 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:39.966 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:39.966 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
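The run above rebuilds the pool of base bdevs after the raid was torn down: for each missing BaseBdevN the script calls bdev_malloc_create over the test's dedicated RPC socket and then waits for the new bdev with the waitforbdev helper. A minimal stand-alone sketch of that create-and-wait pattern, assuming only the rpc.py subcommands and socket path visible in this log (it is a simplified illustration, not the autotest helper itself), looks like:

    #!/usr/bin/env bash
    # Sketch of the create-and-wait pattern from the log above (assumptions:
    # same rpc.py location and RPC socket as this run; bdev name is illustrative).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Create a 32 MiB malloc bdev with 512-byte blocks, named BaseBdev2.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2

    # Let the bdev layer finish examining it, then confirm it is visible,
    # mirroring the "bdev_get_bdevs -b BaseBdev2 -t 2000" call in the log.
    "$rpc" -s "$sock" bdev_wait_for_examine
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev2 -t 2000 | jq -r '.[0].name'

The 2000 ms figure corresponds to the bdev_timeout=2000 set inside waitforbdev in common/autotest_common.sh, which is where the "-t 2000" on the bdev_get_bdevs calls traced above comes from.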
00:14:39.966 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.226 BaseBdev3 00:14:40.226 06:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:40.226 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:40.226 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:40.226 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:40.226 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:40.226 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:40.226 06:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.485 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:40.485 [ 00:14:40.485 { 00:14:40.485 "name": "BaseBdev3", 00:14:40.485 "aliases": [ 00:14:40.485 "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90" 00:14:40.485 ], 00:14:40.485 "product_name": "Malloc disk", 00:14:40.485 "block_size": 512, 00:14:40.485 "num_blocks": 65536, 00:14:40.485 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:40.485 "assigned_rate_limits": { 00:14:40.485 "rw_ios_per_sec": 0, 00:14:40.486 "rw_mbytes_per_sec": 0, 00:14:40.486 "r_mbytes_per_sec": 0, 00:14:40.486 "w_mbytes_per_sec": 0 00:14:40.486 }, 00:14:40.486 "claimed": false, 00:14:40.486 "zoned": false, 00:14:40.486 "supported_io_types": { 00:14:40.486 "read": true, 00:14:40.486 "write": true, 00:14:40.486 "unmap": true, 00:14:40.486 "flush": true, 00:14:40.486 "reset": true, 00:14:40.486 "nvme_admin": false, 00:14:40.486 "nvme_io": false, 00:14:40.486 "nvme_io_md": false, 00:14:40.486 "write_zeroes": true, 00:14:40.486 "zcopy": true, 00:14:40.486 "get_zone_info": false, 00:14:40.486 "zone_management": false, 00:14:40.486 "zone_append": false, 00:14:40.486 "compare": false, 00:14:40.486 "compare_and_write": false, 00:14:40.486 "abort": true, 00:14:40.486 "seek_hole": false, 00:14:40.486 "seek_data": false, 00:14:40.486 "copy": true, 00:14:40.486 "nvme_iov_md": false 00:14:40.486 }, 00:14:40.486 "memory_domains": [ 00:14:40.486 { 00:14:40.486 "dma_device_id": "system", 00:14:40.486 "dma_device_type": 1 00:14:40.486 }, 00:14:40.486 { 00:14:40.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.486 "dma_device_type": 2 00:14:40.486 } 00:14:40.486 ], 00:14:40.486 "driver_specific": {} 00:14:40.486 } 00:14:40.486 ] 00:14:40.486 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:40.486 06:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:40.486 06:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:40.486 06:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.745 BaseBdev4 00:14:40.745 06:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:14:40.745 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:40.745 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:40.745 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:40.745 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:40.745 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:40.745 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.005 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:41.264 [ 00:14:41.264 { 00:14:41.264 "name": "BaseBdev4", 00:14:41.264 "aliases": [ 00:14:41.264 "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd" 00:14:41.264 ], 00:14:41.264 "product_name": "Malloc disk", 00:14:41.264 "block_size": 512, 00:14:41.264 "num_blocks": 65536, 00:14:41.264 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:41.264 "assigned_rate_limits": { 00:14:41.264 "rw_ios_per_sec": 0, 00:14:41.264 "rw_mbytes_per_sec": 0, 00:14:41.264 "r_mbytes_per_sec": 0, 00:14:41.264 "w_mbytes_per_sec": 0 00:14:41.264 }, 00:14:41.264 "claimed": false, 00:14:41.264 "zoned": false, 00:14:41.264 "supported_io_types": { 00:14:41.264 "read": true, 00:14:41.264 "write": true, 00:14:41.264 "unmap": true, 00:14:41.264 "flush": true, 00:14:41.264 "reset": true, 00:14:41.264 "nvme_admin": false, 00:14:41.264 "nvme_io": false, 00:14:41.264 "nvme_io_md": false, 00:14:41.264 "write_zeroes": true, 00:14:41.264 "zcopy": true, 00:14:41.264 "get_zone_info": false, 00:14:41.264 "zone_management": false, 00:14:41.264 "zone_append": false, 00:14:41.264 "compare": false, 00:14:41.264 "compare_and_write": false, 00:14:41.264 "abort": true, 00:14:41.264 "seek_hole": false, 00:14:41.264 "seek_data": false, 00:14:41.264 "copy": true, 00:14:41.264 "nvme_iov_md": false 00:14:41.264 }, 00:14:41.264 "memory_domains": [ 00:14:41.264 { 00:14:41.264 "dma_device_id": "system", 00:14:41.264 "dma_device_type": 1 00:14:41.264 }, 00:14:41.264 { 00:14:41.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.264 "dma_device_type": 2 00:14:41.264 } 00:14:41.264 ], 00:14:41.265 "driver_specific": {} 00:14:41.265 } 00:14:41.265 ] 00:14:41.265 06:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:41.265 06:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:41.265 06:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:41.265 06:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:41.265 [2024-08-13 06:09:43.017334] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.265 [2024-08-13 06:09:43.017390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.265 [2024-08-13 06:09:43.017410] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:14:41.265 [2024-08-13 06:09:43.019074] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.265 [2024-08-13 06:09:43.019126] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.265 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.525 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.525 "name": "Existed_Raid", 00:14:41.525 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:41.525 "strip_size_kb": 64, 00:14:41.525 "state": "configuring", 00:14:41.525 "raid_level": "concat", 00:14:41.525 "superblock": true, 00:14:41.525 "num_base_bdevs": 4, 00:14:41.525 "num_base_bdevs_discovered": 3, 00:14:41.525 "num_base_bdevs_operational": 4, 00:14:41.525 "base_bdevs_list": [ 00:14:41.525 { 00:14:41.525 "name": "BaseBdev1", 00:14:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.525 "is_configured": false, 00:14:41.525 "data_offset": 0, 00:14:41.525 "data_size": 0 00:14:41.525 }, 00:14:41.525 { 00:14:41.525 "name": "BaseBdev2", 00:14:41.525 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:41.525 "is_configured": true, 00:14:41.525 "data_offset": 2048, 00:14:41.525 "data_size": 63488 00:14:41.525 }, 00:14:41.525 { 00:14:41.525 "name": "BaseBdev3", 00:14:41.525 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:41.525 "is_configured": true, 00:14:41.525 "data_offset": 2048, 00:14:41.525 "data_size": 63488 00:14:41.525 }, 00:14:41.525 { 00:14:41.525 "name": "BaseBdev4", 00:14:41.525 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:41.525 "is_configured": true, 00:14:41.525 "data_offset": 2048, 00:14:41.525 "data_size": 63488 00:14:41.525 } 00:14:41.525 ] 00:14:41.525 }' 00:14:41.525 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.525 06:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:42.095 [2024-08-13 06:09:43.863975] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.095 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.355 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.355 06:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.355 06:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.355 "name": "Existed_Raid", 00:14:42.355 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:42.355 "strip_size_kb": 64, 00:14:42.355 "state": "configuring", 00:14:42.355 "raid_level": "concat", 00:14:42.355 "superblock": true, 00:14:42.355 "num_base_bdevs": 4, 00:14:42.355 "num_base_bdevs_discovered": 2, 00:14:42.355 "num_base_bdevs_operational": 4, 00:14:42.355 "base_bdevs_list": [ 00:14:42.355 { 00:14:42.355 "name": "BaseBdev1", 00:14:42.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.355 "is_configured": false, 00:14:42.355 "data_offset": 0, 00:14:42.355 "data_size": 0 00:14:42.355 }, 00:14:42.355 { 00:14:42.355 "name": null, 00:14:42.355 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:42.355 "is_configured": false, 00:14:42.355 "data_offset": 2048, 00:14:42.355 "data_size": 63488 00:14:42.355 }, 00:14:42.355 { 00:14:42.355 "name": "BaseBdev3", 00:14:42.355 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:42.355 "is_configured": true, 00:14:42.355 "data_offset": 2048, 00:14:42.355 "data_size": 63488 00:14:42.355 }, 00:14:42.355 { 00:14:42.355 "name": "BaseBdev4", 00:14:42.355 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:42.355 "is_configured": true, 00:14:42.355 "data_offset": 2048, 00:14:42.355 "data_size": 63488 00:14:42.355 } 00:14:42.355 ] 00:14:42.355 }' 00:14:42.355 06:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.355 06:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.924 06:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:14:42.924 06:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:43.183 06:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:43.183 06:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.443 [2024-08-13 06:09:44.985149] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.443 BaseBdev1 00:14:43.443 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:43.443 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:43.443 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:43.443 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:43.443 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:43.443 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:43.443 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.703 [ 00:14:43.703 { 00:14:43.703 "name": "BaseBdev1", 00:14:43.703 "aliases": [ 00:14:43.703 "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3" 00:14:43.703 ], 00:14:43.703 "product_name": "Malloc disk", 00:14:43.703 "block_size": 512, 00:14:43.703 "num_blocks": 65536, 00:14:43.703 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:43.703 "assigned_rate_limits": { 00:14:43.703 "rw_ios_per_sec": 0, 00:14:43.703 "rw_mbytes_per_sec": 0, 00:14:43.703 "r_mbytes_per_sec": 0, 00:14:43.703 "w_mbytes_per_sec": 0 00:14:43.703 }, 00:14:43.703 "claimed": true, 00:14:43.703 "claim_type": "exclusive_write", 00:14:43.703 "zoned": false, 00:14:43.703 "supported_io_types": { 00:14:43.703 "read": true, 00:14:43.703 "write": true, 00:14:43.703 "unmap": true, 00:14:43.703 "flush": true, 00:14:43.703 "reset": true, 00:14:43.703 "nvme_admin": false, 00:14:43.703 "nvme_io": false, 00:14:43.703 "nvme_io_md": false, 00:14:43.703 "write_zeroes": true, 00:14:43.703 "zcopy": true, 00:14:43.703 "get_zone_info": false, 00:14:43.703 "zone_management": false, 00:14:43.703 "zone_append": false, 00:14:43.703 "compare": false, 00:14:43.703 "compare_and_write": false, 00:14:43.703 "abort": true, 00:14:43.703 "seek_hole": false, 00:14:43.703 "seek_data": false, 00:14:43.703 "copy": true, 00:14:43.703 "nvme_iov_md": false 00:14:43.703 }, 00:14:43.703 "memory_domains": [ 00:14:43.703 { 00:14:43.703 "dma_device_id": "system", 00:14:43.703 "dma_device_type": 1 00:14:43.703 }, 00:14:43.703 { 00:14:43.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.703 "dma_device_type": 2 00:14:43.703 } 00:14:43.703 ], 00:14:43.703 "driver_specific": {} 00:14:43.703 } 00:14:43.703 ] 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:43.703 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:43.704 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.704 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.704 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.704 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.704 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.704 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.963 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.963 "name": "Existed_Raid", 00:14:43.963 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:43.963 "strip_size_kb": 64, 00:14:43.963 "state": "configuring", 00:14:43.963 "raid_level": "concat", 00:14:43.963 "superblock": true, 00:14:43.963 "num_base_bdevs": 4, 00:14:43.963 "num_base_bdevs_discovered": 3, 00:14:43.963 "num_base_bdevs_operational": 4, 00:14:43.963 "base_bdevs_list": [ 00:14:43.963 { 00:14:43.963 "name": "BaseBdev1", 00:14:43.963 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:43.963 "is_configured": true, 00:14:43.963 "data_offset": 2048, 00:14:43.963 "data_size": 63488 00:14:43.963 }, 00:14:43.963 { 00:14:43.964 "name": null, 00:14:43.964 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:43.964 "is_configured": false, 00:14:43.964 "data_offset": 2048, 00:14:43.964 "data_size": 63488 00:14:43.964 }, 00:14:43.964 { 00:14:43.964 "name": "BaseBdev3", 00:14:43.964 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:43.964 "is_configured": true, 00:14:43.964 "data_offset": 2048, 00:14:43.964 "data_size": 63488 00:14:43.964 }, 00:14:43.964 { 00:14:43.964 "name": "BaseBdev4", 00:14:43.964 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:43.964 "is_configured": true, 00:14:43.964 "data_offset": 2048, 00:14:43.964 "data_size": 63488 00:14:43.964 } 00:14:43.964 ] 00:14:43.964 }' 00:14:43.964 06:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.964 06:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.533 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.533 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:44.792 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:44.792 06:09:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:45.052 [2024-08-13 06:09:46.594709] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.052 "name": "Existed_Raid", 00:14:45.052 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:45.052 "strip_size_kb": 64, 00:14:45.052 "state": "configuring", 00:14:45.052 "raid_level": "concat", 00:14:45.052 "superblock": true, 00:14:45.052 "num_base_bdevs": 4, 00:14:45.052 "num_base_bdevs_discovered": 2, 00:14:45.052 "num_base_bdevs_operational": 4, 00:14:45.052 "base_bdevs_list": [ 00:14:45.052 { 00:14:45.052 "name": "BaseBdev1", 00:14:45.052 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:45.052 "is_configured": true, 00:14:45.052 "data_offset": 2048, 00:14:45.052 "data_size": 63488 00:14:45.052 }, 00:14:45.052 { 00:14:45.052 "name": null, 00:14:45.052 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:45.052 "is_configured": false, 00:14:45.052 "data_offset": 2048, 00:14:45.052 "data_size": 63488 00:14:45.052 }, 00:14:45.052 { 00:14:45.052 "name": null, 00:14:45.052 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:45.052 "is_configured": false, 00:14:45.052 "data_offset": 2048, 00:14:45.052 "data_size": 63488 00:14:45.052 }, 00:14:45.052 { 00:14:45.052 "name": "BaseBdev4", 00:14:45.052 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:45.052 "is_configured": true, 00:14:45.052 "data_offset": 2048, 00:14:45.052 "data_size": 63488 00:14:45.052 } 00:14:45.052 ] 00:14:45.052 }' 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.052 06:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.622 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.622 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:45.882 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:45.882 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:46.143 [2024-08-13 06:09:47.725085] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.143 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.403 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.403 "name": "Existed_Raid", 00:14:46.403 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:46.403 "strip_size_kb": 64, 00:14:46.403 "state": "configuring", 00:14:46.403 "raid_level": "concat", 00:14:46.403 "superblock": true, 00:14:46.403 "num_base_bdevs": 4, 00:14:46.403 "num_base_bdevs_discovered": 3, 00:14:46.403 "num_base_bdevs_operational": 4, 00:14:46.403 "base_bdevs_list": [ 00:14:46.403 { 00:14:46.403 "name": "BaseBdev1", 00:14:46.403 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:46.403 "is_configured": true, 00:14:46.403 "data_offset": 2048, 00:14:46.403 "data_size": 63488 00:14:46.403 }, 00:14:46.403 { 00:14:46.403 "name": null, 00:14:46.403 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:46.403 "is_configured": false, 00:14:46.403 "data_offset": 2048, 00:14:46.403 "data_size": 63488 00:14:46.403 }, 00:14:46.403 { 00:14:46.403 "name": "BaseBdev3", 00:14:46.403 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:46.403 "is_configured": true, 00:14:46.403 "data_offset": 2048, 00:14:46.403 "data_size": 63488 00:14:46.403 }, 00:14:46.403 { 00:14:46.403 "name": "BaseBdev4", 00:14:46.403 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:46.403 "is_configured": true, 00:14:46.403 "data_offset": 2048, 
00:14:46.403 "data_size": 63488 00:14:46.403 } 00:14:46.403 ] 00:14:46.403 }' 00:14:46.403 06:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.403 06:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.973 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.973 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.973 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:46.973 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:47.233 [2024-08-13 06:09:48.895135] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.233 06:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.493 06:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.493 "name": "Existed_Raid", 00:14:47.493 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:47.493 "strip_size_kb": 64, 00:14:47.493 "state": "configuring", 00:14:47.493 "raid_level": "concat", 00:14:47.493 "superblock": true, 00:14:47.493 "num_base_bdevs": 4, 00:14:47.493 "num_base_bdevs_discovered": 2, 00:14:47.493 "num_base_bdevs_operational": 4, 00:14:47.493 "base_bdevs_list": [ 00:14:47.493 { 00:14:47.493 "name": null, 00:14:47.493 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:47.493 "is_configured": false, 00:14:47.493 "data_offset": 2048, 00:14:47.493 "data_size": 63488 00:14:47.493 }, 00:14:47.493 { 00:14:47.493 "name": null, 00:14:47.493 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:47.493 "is_configured": false, 00:14:47.493 "data_offset": 2048, 00:14:47.493 "data_size": 63488 00:14:47.493 }, 00:14:47.493 { 00:14:47.493 "name": "BaseBdev3", 00:14:47.493 "uuid": 
"4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:47.493 "is_configured": true, 00:14:47.493 "data_offset": 2048, 00:14:47.493 "data_size": 63488 00:14:47.493 }, 00:14:47.493 { 00:14:47.493 "name": "BaseBdev4", 00:14:47.493 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:47.493 "is_configured": true, 00:14:47.493 "data_offset": 2048, 00:14:47.493 "data_size": 63488 00:14:47.493 } 00:14:47.493 ] 00:14:47.493 }' 00:14:47.493 06:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.493 06:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.064 06:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.064 06:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.428 06:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:48.428 06:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:48.428 [2024-08-13 06:09:50.056160] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.428 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.688 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.688 "name": "Existed_Raid", 00:14:48.688 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:48.688 "strip_size_kb": 64, 00:14:48.688 "state": "configuring", 00:14:48.688 "raid_level": "concat", 00:14:48.688 "superblock": true, 00:14:48.688 "num_base_bdevs": 4, 00:14:48.688 "num_base_bdevs_discovered": 3, 00:14:48.688 "num_base_bdevs_operational": 4, 00:14:48.688 "base_bdevs_list": [ 00:14:48.688 { 00:14:48.688 "name": null, 00:14:48.688 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:48.688 "is_configured": false, 
00:14:48.688 "data_offset": 2048, 00:14:48.688 "data_size": 63488 00:14:48.688 }, 00:14:48.688 { 00:14:48.688 "name": "BaseBdev2", 00:14:48.688 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:48.688 "is_configured": true, 00:14:48.688 "data_offset": 2048, 00:14:48.688 "data_size": 63488 00:14:48.688 }, 00:14:48.688 { 00:14:48.688 "name": "BaseBdev3", 00:14:48.688 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:48.688 "is_configured": true, 00:14:48.688 "data_offset": 2048, 00:14:48.688 "data_size": 63488 00:14:48.688 }, 00:14:48.688 { 00:14:48.688 "name": "BaseBdev4", 00:14:48.688 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:48.688 "is_configured": true, 00:14:48.688 "data_offset": 2048, 00:14:48.688 "data_size": 63488 00:14:48.688 } 00:14:48.688 ] 00:14:48.688 }' 00:14:48.688 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.688 06:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.257 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.257 06:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.257 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:49.257 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:49.257 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.517 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 61b7943d-6cec-4d48-afc2-1c07fe7a7cd3 00:14:49.777 [2024-08-13 06:09:51.400936] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:49.777 [2024-08-13 06:09:51.401103] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:49.777 [2024-08-13 06:09:51.401120] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:49.777 [2024-08-13 06:09:51.401352] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:49.777 [2024-08-13 06:09:51.401516] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:49.777 [2024-08-13 06:09:51.401531] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:49.777 [2024-08-13 06:09:51.401670] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.777 NewBaseBdev 00:14:49.777 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:49.777 06:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:49.777 06:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:49.777 06:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:49.777 06:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:49.777 06:09:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:49.777 06:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.037 06:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:50.297 [ 00:14:50.297 { 00:14:50.297 "name": "NewBaseBdev", 00:14:50.297 "aliases": [ 00:14:50.297 "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3" 00:14:50.297 ], 00:14:50.297 "product_name": "Malloc disk", 00:14:50.297 "block_size": 512, 00:14:50.297 "num_blocks": 65536, 00:14:50.297 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:50.297 "assigned_rate_limits": { 00:14:50.297 "rw_ios_per_sec": 0, 00:14:50.297 "rw_mbytes_per_sec": 0, 00:14:50.297 "r_mbytes_per_sec": 0, 00:14:50.297 "w_mbytes_per_sec": 0 00:14:50.297 }, 00:14:50.297 "claimed": true, 00:14:50.297 "claim_type": "exclusive_write", 00:14:50.297 "zoned": false, 00:14:50.297 "supported_io_types": { 00:14:50.297 "read": true, 00:14:50.297 "write": true, 00:14:50.297 "unmap": true, 00:14:50.297 "flush": true, 00:14:50.297 "reset": true, 00:14:50.297 "nvme_admin": false, 00:14:50.297 "nvme_io": false, 00:14:50.297 "nvme_io_md": false, 00:14:50.297 "write_zeroes": true, 00:14:50.297 "zcopy": true, 00:14:50.297 "get_zone_info": false, 00:14:50.297 "zone_management": false, 00:14:50.297 "zone_append": false, 00:14:50.297 "compare": false, 00:14:50.297 "compare_and_write": false, 00:14:50.297 "abort": true, 00:14:50.297 "seek_hole": false, 00:14:50.297 "seek_data": false, 00:14:50.297 "copy": true, 00:14:50.297 "nvme_iov_md": false 00:14:50.297 }, 00:14:50.297 "memory_domains": [ 00:14:50.297 { 00:14:50.297 "dma_device_id": "system", 00:14:50.297 "dma_device_type": 1 00:14:50.297 }, 00:14:50.297 { 00:14:50.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.297 "dma_device_type": 2 00:14:50.297 } 00:14:50.297 ], 00:14:50.297 "driver_specific": {} 00:14:50.297 } 00:14:50.297 ] 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.297 06:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.297 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.297 "name": "Existed_Raid", 00:14:50.297 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:50.297 "strip_size_kb": 64, 00:14:50.297 "state": "online", 00:14:50.297 "raid_level": "concat", 00:14:50.297 "superblock": true, 00:14:50.297 "num_base_bdevs": 4, 00:14:50.297 "num_base_bdevs_discovered": 4, 00:14:50.297 "num_base_bdevs_operational": 4, 00:14:50.297 "base_bdevs_list": [ 00:14:50.297 { 00:14:50.297 "name": "NewBaseBdev", 00:14:50.297 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:50.297 "is_configured": true, 00:14:50.297 "data_offset": 2048, 00:14:50.297 "data_size": 63488 00:14:50.297 }, 00:14:50.297 { 00:14:50.297 "name": "BaseBdev2", 00:14:50.297 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:50.297 "is_configured": true, 00:14:50.297 "data_offset": 2048, 00:14:50.297 "data_size": 63488 00:14:50.297 }, 00:14:50.297 { 00:14:50.297 "name": "BaseBdev3", 00:14:50.297 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:50.297 "is_configured": true, 00:14:50.297 "data_offset": 2048, 00:14:50.297 "data_size": 63488 00:14:50.297 }, 00:14:50.297 { 00:14:50.297 "name": "BaseBdev4", 00:14:50.297 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:50.297 "is_configured": true, 00:14:50.297 "data_offset": 2048, 00:14:50.298 "data_size": 63488 00:14:50.298 } 00:14:50.298 ] 00:14:50.298 }' 00:14:50.298 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.298 06:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:50.866 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:51.125 [2024-08-13 06:09:52.790913] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.125 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:51.125 "name": "Existed_Raid", 00:14:51.125 "aliases": [ 00:14:51.125 "cdaf860e-08da-4e78-8811-0bfe7552b61b" 00:14:51.125 ], 00:14:51.125 "product_name": "Raid Volume", 00:14:51.125 "block_size": 512, 00:14:51.125 "num_blocks": 253952, 00:14:51.125 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:51.125 "assigned_rate_limits": { 00:14:51.125 "rw_ios_per_sec": 0, 00:14:51.125 "rw_mbytes_per_sec": 0, 00:14:51.125 "r_mbytes_per_sec": 0, 00:14:51.125 "w_mbytes_per_sec": 0 00:14:51.125 }, 
00:14:51.125 "claimed": false, 00:14:51.125 "zoned": false, 00:14:51.125 "supported_io_types": { 00:14:51.125 "read": true, 00:14:51.125 "write": true, 00:14:51.125 "unmap": true, 00:14:51.125 "flush": true, 00:14:51.125 "reset": true, 00:14:51.125 "nvme_admin": false, 00:14:51.125 "nvme_io": false, 00:14:51.125 "nvme_io_md": false, 00:14:51.125 "write_zeroes": true, 00:14:51.125 "zcopy": false, 00:14:51.125 "get_zone_info": false, 00:14:51.125 "zone_management": false, 00:14:51.125 "zone_append": false, 00:14:51.125 "compare": false, 00:14:51.125 "compare_and_write": false, 00:14:51.125 "abort": false, 00:14:51.125 "seek_hole": false, 00:14:51.125 "seek_data": false, 00:14:51.125 "copy": false, 00:14:51.125 "nvme_iov_md": false 00:14:51.125 }, 00:14:51.125 "memory_domains": [ 00:14:51.125 { 00:14:51.125 "dma_device_id": "system", 00:14:51.125 "dma_device_type": 1 00:14:51.125 }, 00:14:51.125 { 00:14:51.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.125 "dma_device_type": 2 00:14:51.125 }, 00:14:51.125 { 00:14:51.125 "dma_device_id": "system", 00:14:51.125 "dma_device_type": 1 00:14:51.125 }, 00:14:51.125 { 00:14:51.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.125 "dma_device_type": 2 00:14:51.125 }, 00:14:51.125 { 00:14:51.125 "dma_device_id": "system", 00:14:51.125 "dma_device_type": 1 00:14:51.125 }, 00:14:51.125 { 00:14:51.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.125 "dma_device_type": 2 00:14:51.125 }, 00:14:51.125 { 00:14:51.125 "dma_device_id": "system", 00:14:51.125 "dma_device_type": 1 00:14:51.125 }, 00:14:51.125 { 00:14:51.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.125 "dma_device_type": 2 00:14:51.125 } 00:14:51.125 ], 00:14:51.125 "driver_specific": { 00:14:51.125 "raid": { 00:14:51.125 "uuid": "cdaf860e-08da-4e78-8811-0bfe7552b61b", 00:14:51.125 "strip_size_kb": 64, 00:14:51.125 "state": "online", 00:14:51.125 "raid_level": "concat", 00:14:51.125 "superblock": true, 00:14:51.125 "num_base_bdevs": 4, 00:14:51.125 "num_base_bdevs_discovered": 4, 00:14:51.125 "num_base_bdevs_operational": 4, 00:14:51.125 "base_bdevs_list": [ 00:14:51.125 { 00:14:51.125 "name": "NewBaseBdev", 00:14:51.125 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:51.125 "is_configured": true, 00:14:51.126 "data_offset": 2048, 00:14:51.126 "data_size": 63488 00:14:51.126 }, 00:14:51.126 { 00:14:51.126 "name": "BaseBdev2", 00:14:51.126 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:51.126 "is_configured": true, 00:14:51.126 "data_offset": 2048, 00:14:51.126 "data_size": 63488 00:14:51.126 }, 00:14:51.126 { 00:14:51.126 "name": "BaseBdev3", 00:14:51.126 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:51.126 "is_configured": true, 00:14:51.126 "data_offset": 2048, 00:14:51.126 "data_size": 63488 00:14:51.126 }, 00:14:51.126 { 00:14:51.126 "name": "BaseBdev4", 00:14:51.126 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:51.126 "is_configured": true, 00:14:51.126 "data_offset": 2048, 00:14:51.126 "data_size": 63488 00:14:51.126 } 00:14:51.126 ] 00:14:51.126 } 00:14:51.126 } 00:14:51.126 }' 00:14:51.126 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:51.126 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:51.126 BaseBdev2 00:14:51.126 BaseBdev3 00:14:51.126 BaseBdev4' 00:14:51.126 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:14:51.126 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:51.126 06:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.385 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.385 "name": "NewBaseBdev", 00:14:51.385 "aliases": [ 00:14:51.385 "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3" 00:14:51.385 ], 00:14:51.385 "product_name": "Malloc disk", 00:14:51.385 "block_size": 512, 00:14:51.385 "num_blocks": 65536, 00:14:51.385 "uuid": "61b7943d-6cec-4d48-afc2-1c07fe7a7cd3", 00:14:51.385 "assigned_rate_limits": { 00:14:51.385 "rw_ios_per_sec": 0, 00:14:51.385 "rw_mbytes_per_sec": 0, 00:14:51.385 "r_mbytes_per_sec": 0, 00:14:51.385 "w_mbytes_per_sec": 0 00:14:51.385 }, 00:14:51.385 "claimed": true, 00:14:51.385 "claim_type": "exclusive_write", 00:14:51.385 "zoned": false, 00:14:51.385 "supported_io_types": { 00:14:51.385 "read": true, 00:14:51.385 "write": true, 00:14:51.385 "unmap": true, 00:14:51.385 "flush": true, 00:14:51.385 "reset": true, 00:14:51.385 "nvme_admin": false, 00:14:51.385 "nvme_io": false, 00:14:51.385 "nvme_io_md": false, 00:14:51.385 "write_zeroes": true, 00:14:51.385 "zcopy": true, 00:14:51.385 "get_zone_info": false, 00:14:51.385 "zone_management": false, 00:14:51.385 "zone_append": false, 00:14:51.385 "compare": false, 00:14:51.385 "compare_and_write": false, 00:14:51.385 "abort": true, 00:14:51.385 "seek_hole": false, 00:14:51.385 "seek_data": false, 00:14:51.385 "copy": true, 00:14:51.385 "nvme_iov_md": false 00:14:51.386 }, 00:14:51.386 "memory_domains": [ 00:14:51.386 { 00:14:51.386 "dma_device_id": "system", 00:14:51.386 "dma_device_type": 1 00:14:51.386 }, 00:14:51.386 { 00:14:51.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.386 "dma_device_type": 2 00:14:51.386 } 00:14:51.386 ], 00:14:51.386 "driver_specific": {} 00:14:51.386 }' 00:14:51.386 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.386 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.386 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.386 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.645 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.646 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.646 06:09:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:51.905 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.905 "name": "BaseBdev2", 00:14:51.905 "aliases": [ 00:14:51.905 "46335b9e-f636-45af-9f55-f9ad9475be81" 00:14:51.905 ], 00:14:51.905 "product_name": "Malloc disk", 00:14:51.905 "block_size": 512, 00:14:51.905 "num_blocks": 65536, 00:14:51.905 "uuid": "46335b9e-f636-45af-9f55-f9ad9475be81", 00:14:51.905 "assigned_rate_limits": { 00:14:51.905 "rw_ios_per_sec": 0, 00:14:51.905 "rw_mbytes_per_sec": 0, 00:14:51.905 "r_mbytes_per_sec": 0, 00:14:51.905 "w_mbytes_per_sec": 0 00:14:51.905 }, 00:14:51.905 "claimed": true, 00:14:51.905 "claim_type": "exclusive_write", 00:14:51.905 "zoned": false, 00:14:51.905 "supported_io_types": { 00:14:51.905 "read": true, 00:14:51.905 "write": true, 00:14:51.905 "unmap": true, 00:14:51.905 "flush": true, 00:14:51.905 "reset": true, 00:14:51.905 "nvme_admin": false, 00:14:51.905 "nvme_io": false, 00:14:51.905 "nvme_io_md": false, 00:14:51.905 "write_zeroes": true, 00:14:51.905 "zcopy": true, 00:14:51.905 "get_zone_info": false, 00:14:51.905 "zone_management": false, 00:14:51.905 "zone_append": false, 00:14:51.905 "compare": false, 00:14:51.905 "compare_and_write": false, 00:14:51.905 "abort": true, 00:14:51.905 "seek_hole": false, 00:14:51.905 "seek_data": false, 00:14:51.905 "copy": true, 00:14:51.905 "nvme_iov_md": false 00:14:51.905 }, 00:14:51.905 "memory_domains": [ 00:14:51.905 { 00:14:51.905 "dma_device_id": "system", 00:14:51.905 "dma_device_type": 1 00:14:51.905 }, 00:14:51.905 { 00:14:51.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.905 "dma_device_type": 2 00:14:51.905 } 00:14:51.905 ], 00:14:51.905 "driver_specific": {} 00:14:51.905 }' 00:14:51.905 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.905 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.164 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.423 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:52.424 06:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:52.424 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:52.424 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:14:52.424 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:52.424 "name": "BaseBdev3", 00:14:52.424 "aliases": [ 00:14:52.424 "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90" 00:14:52.424 ], 00:14:52.424 "product_name": "Malloc disk", 00:14:52.424 "block_size": 512, 00:14:52.424 "num_blocks": 65536, 00:14:52.424 "uuid": "4a3b48d7-2b75-4e6b-a4f3-051ab5801b90", 00:14:52.424 "assigned_rate_limits": { 00:14:52.424 "rw_ios_per_sec": 0, 00:14:52.424 "rw_mbytes_per_sec": 0, 00:14:52.424 "r_mbytes_per_sec": 0, 00:14:52.424 "w_mbytes_per_sec": 0 00:14:52.424 }, 00:14:52.424 "claimed": true, 00:14:52.424 "claim_type": "exclusive_write", 00:14:52.424 "zoned": false, 00:14:52.424 "supported_io_types": { 00:14:52.424 "read": true, 00:14:52.424 "write": true, 00:14:52.424 "unmap": true, 00:14:52.424 "flush": true, 00:14:52.424 "reset": true, 00:14:52.424 "nvme_admin": false, 00:14:52.424 "nvme_io": false, 00:14:52.424 "nvme_io_md": false, 00:14:52.424 "write_zeroes": true, 00:14:52.424 "zcopy": true, 00:14:52.424 "get_zone_info": false, 00:14:52.424 "zone_management": false, 00:14:52.424 "zone_append": false, 00:14:52.424 "compare": false, 00:14:52.424 "compare_and_write": false, 00:14:52.424 "abort": true, 00:14:52.424 "seek_hole": false, 00:14:52.424 "seek_data": false, 00:14:52.424 "copy": true, 00:14:52.424 "nvme_iov_md": false 00:14:52.424 }, 00:14:52.424 "memory_domains": [ 00:14:52.424 { 00:14:52.424 "dma_device_id": "system", 00:14:52.424 "dma_device_type": 1 00:14:52.424 }, 00:14:52.424 { 00:14:52.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.424 "dma_device_type": 2 00:14:52.424 } 00:14:52.424 ], 00:14:52.424 "driver_specific": {} 00:14:52.424 }' 00:14:52.424 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.683 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:52.943 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.943 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.943 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:52.943 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:52.943 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:52.943 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:53.203 "name": 
"BaseBdev4", 00:14:53.203 "aliases": [ 00:14:53.203 "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd" 00:14:53.203 ], 00:14:53.203 "product_name": "Malloc disk", 00:14:53.203 "block_size": 512, 00:14:53.203 "num_blocks": 65536, 00:14:53.203 "uuid": "a7dfcfe7-47fe-4cab-83fa-6c428ada19fd", 00:14:53.203 "assigned_rate_limits": { 00:14:53.203 "rw_ios_per_sec": 0, 00:14:53.203 "rw_mbytes_per_sec": 0, 00:14:53.203 "r_mbytes_per_sec": 0, 00:14:53.203 "w_mbytes_per_sec": 0 00:14:53.203 }, 00:14:53.203 "claimed": true, 00:14:53.203 "claim_type": "exclusive_write", 00:14:53.203 "zoned": false, 00:14:53.203 "supported_io_types": { 00:14:53.203 "read": true, 00:14:53.203 "write": true, 00:14:53.203 "unmap": true, 00:14:53.203 "flush": true, 00:14:53.203 "reset": true, 00:14:53.203 "nvme_admin": false, 00:14:53.203 "nvme_io": false, 00:14:53.203 "nvme_io_md": false, 00:14:53.203 "write_zeroes": true, 00:14:53.203 "zcopy": true, 00:14:53.203 "get_zone_info": false, 00:14:53.203 "zone_management": false, 00:14:53.203 "zone_append": false, 00:14:53.203 "compare": false, 00:14:53.203 "compare_and_write": false, 00:14:53.203 "abort": true, 00:14:53.203 "seek_hole": false, 00:14:53.203 "seek_data": false, 00:14:53.203 "copy": true, 00:14:53.203 "nvme_iov_md": false 00:14:53.203 }, 00:14:53.203 "memory_domains": [ 00:14:53.203 { 00:14:53.203 "dma_device_id": "system", 00:14:53.203 "dma_device_type": 1 00:14:53.203 }, 00:14:53.203 { 00:14:53.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.203 "dma_device_type": 2 00:14:53.203 } 00:14:53.203 ], 00:14:53.203 "driver_specific": {} 00:14:53.203 }' 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:53.203 06:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.463 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.463 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:53.463 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.463 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.463 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:53.463 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:53.723 [2024-08-13 06:09:55.346558] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.723 [2024-08-13 06:09:55.346592] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.723 [2024-08-13 06:09:55.346677] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.723 [2024-08-13 06:09:55.346741] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:14:53.723 [2024-08-13 06:09:55.346753] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 87049 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 87049 ']' 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 87049 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87049 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87049' 00:14:53.723 killing process with pid 87049 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 87049 00:14:53.723 [2024-08-13 06:09:55.408043] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.723 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 87049 00:14:53.723 [2024-08-13 06:09:55.448056] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.984 06:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:53.984 00:14:53.984 real 0m28.078s 00:14:53.984 user 0m51.702s 00:14:53.984 sys 0m4.678s 00:14:53.984 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:53.984 ************************************ 00:14:53.984 END TEST raid_state_function_test_sb 00:14:53.984 ************************************ 00:14:53.984 06:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.984 06:09:55 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:53.984 06:09:55 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:53.984 06:09:55 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:53.984 06:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.984 ************************************ 00:14:53.984 START TEST raid_superblock_test 00:14:53.984 ************************************ 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 
00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:53.984 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=88052 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 88052 /var/tmp/spdk-raid.sock 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 88052 ']' 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:54.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.244 06:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.244 [2024-08-13 06:09:55.862107] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:14:54.245 [2024-08-13 06:09:55.862225] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88052 ] 00:14:54.245 [2024-08-13 06:09:56.006703] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.505 [2024-08-13 06:09:56.075889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.505 [2024-08-13 06:09:56.151664] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.505 [2024-08-13 06:09:56.151705] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:55.074 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:55.075 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:55.335 malloc1 00:14:55.335 06:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:55.335 [2024-08-13 06:09:57.102218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:55.335 [2024-08-13 06:09:57.102408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.335 [2024-08-13 06:09:57.102457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:55.335 [2024-08-13 06:09:57.102492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.335 [2024-08-13 06:09:57.105087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.335 [2024-08-13 06:09:57.105198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:55.335 pt1 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:55.335 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:55.595 malloc2 00:14:55.595 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.855 [2024-08-13 06:09:57.532732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.855 [2024-08-13 06:09:57.532917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.855 [2024-08-13 06:09:57.532966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:55.856 [2024-08-13 06:09:57.532999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.856 [2024-08-13 06:09:57.535620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.856 [2024-08-13 06:09:57.535712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.856 pt2 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:55.856 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:56.116 malloc3 00:14:56.116 06:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.376 [2024-08-13 06:09:57.984146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.376 [2024-08-13 06:09:57.984332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.376 [2024-08-13 06:09:57.984385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:56.376 [2024-08-13 06:09:57.984423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.376 [2024-08-13 06:09:57.986976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.376 [2024-08-13 
06:09:57.987101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.376 pt3 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.376 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:56.637 malloc4 00:14:56.637 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:56.637 [2024-08-13 06:09:58.394348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:56.637 [2024-08-13 06:09:58.394480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.637 [2024-08-13 06:09:58.394525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:56.637 [2024-08-13 06:09:58.394560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.637 [2024-08-13 06:09:58.396931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.637 [2024-08-13 06:09:58.397039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:56.637 pt4 00:14:56.637 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:56.637 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:56.637 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:56.897 [2024-08-13 06:09:58.582220] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.897 [2024-08-13 06:09:58.584334] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.897 [2024-08-13 06:09:58.584455] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.897 [2024-08-13 06:09:58.584537] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:56.897 [2024-08-13 06:09:58.584755] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:56.897 [2024-08-13 06:09:58.584809] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:56.897 [2024-08-13 06:09:58.585176] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:56.897 [2024-08-13 06:09:58.585375] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:56.897 [2024-08-13 06:09:58.585437] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:56.897 [2024-08-13 06:09:58.585621] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.897 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.157 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.157 "name": "raid_bdev1", 00:14:57.157 "uuid": "5d857abf-b45a-4650-aed9-7fc449ec3115", 00:14:57.157 "strip_size_kb": 64, 00:14:57.157 "state": "online", 00:14:57.157 "raid_level": "concat", 00:14:57.157 "superblock": true, 00:14:57.157 "num_base_bdevs": 4, 00:14:57.157 "num_base_bdevs_discovered": 4, 00:14:57.157 "num_base_bdevs_operational": 4, 00:14:57.157 "base_bdevs_list": [ 00:14:57.157 { 00:14:57.157 "name": "pt1", 00:14:57.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.157 "is_configured": true, 00:14:57.157 "data_offset": 2048, 00:14:57.157 "data_size": 63488 00:14:57.157 }, 00:14:57.157 { 00:14:57.157 "name": "pt2", 00:14:57.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.157 "is_configured": true, 00:14:57.157 "data_offset": 2048, 00:14:57.157 "data_size": 63488 00:14:57.157 }, 00:14:57.157 { 00:14:57.157 "name": "pt3", 00:14:57.157 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.157 "is_configured": true, 00:14:57.157 "data_offset": 2048, 00:14:57.157 "data_size": 63488 00:14:57.157 }, 00:14:57.157 { 00:14:57.157 "name": "pt4", 00:14:57.157 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.158 "is_configured": true, 00:14:57.158 "data_offset": 2048, 00:14:57.158 "data_size": 63488 00:14:57.158 } 00:14:57.158 ] 00:14:57.158 }' 00:14:57.158 06:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.158 06:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:57.727 [2024-08-13 06:09:59.496990] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.727 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:57.727 "name": "raid_bdev1", 00:14:57.727 "aliases": [ 00:14:57.727 "5d857abf-b45a-4650-aed9-7fc449ec3115" 00:14:57.727 ], 00:14:57.727 "product_name": "Raid Volume", 00:14:57.727 "block_size": 512, 00:14:57.727 "num_blocks": 253952, 00:14:57.727 "uuid": "5d857abf-b45a-4650-aed9-7fc449ec3115", 00:14:57.727 "assigned_rate_limits": { 00:14:57.727 "rw_ios_per_sec": 0, 00:14:57.727 "rw_mbytes_per_sec": 0, 00:14:57.727 "r_mbytes_per_sec": 0, 00:14:57.727 "w_mbytes_per_sec": 0 00:14:57.727 }, 00:14:57.727 "claimed": false, 00:14:57.727 "zoned": false, 00:14:57.727 "supported_io_types": { 00:14:57.727 "read": true, 00:14:57.727 "write": true, 00:14:57.727 "unmap": true, 00:14:57.727 "flush": true, 00:14:57.727 "reset": true, 00:14:57.727 "nvme_admin": false, 00:14:57.727 "nvme_io": false, 00:14:57.727 "nvme_io_md": false, 00:14:57.727 "write_zeroes": true, 00:14:57.727 "zcopy": false, 00:14:57.727 "get_zone_info": false, 00:14:57.727 "zone_management": false, 00:14:57.727 "zone_append": false, 00:14:57.728 "compare": false, 00:14:57.728 "compare_and_write": false, 00:14:57.728 "abort": false, 00:14:57.728 "seek_hole": false, 00:14:57.728 "seek_data": false, 00:14:57.728 "copy": false, 00:14:57.728 "nvme_iov_md": false 00:14:57.728 }, 00:14:57.728 "memory_domains": [ 00:14:57.728 { 00:14:57.728 "dma_device_id": "system", 00:14:57.728 "dma_device_type": 1 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.728 "dma_device_type": 2 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "dma_device_id": "system", 00:14:57.728 "dma_device_type": 1 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.728 "dma_device_type": 2 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "dma_device_id": "system", 00:14:57.728 "dma_device_type": 1 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.728 "dma_device_type": 2 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "dma_device_id": "system", 00:14:57.728 "dma_device_type": 1 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.728 "dma_device_type": 2 00:14:57.728 } 00:14:57.728 ], 00:14:57.728 "driver_specific": { 00:14:57.728 "raid": { 00:14:57.728 "uuid": "5d857abf-b45a-4650-aed9-7fc449ec3115", 00:14:57.728 "strip_size_kb": 64, 00:14:57.728 "state": "online", 00:14:57.728 "raid_level": "concat", 00:14:57.728 "superblock": true, 00:14:57.728 "num_base_bdevs": 4, 00:14:57.728 "num_base_bdevs_discovered": 4, 00:14:57.728 "num_base_bdevs_operational": 4, 
00:14:57.728 "base_bdevs_list": [ 00:14:57.728 { 00:14:57.728 "name": "pt1", 00:14:57.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.728 "is_configured": true, 00:14:57.728 "data_offset": 2048, 00:14:57.728 "data_size": 63488 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "name": "pt2", 00:14:57.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.728 "is_configured": true, 00:14:57.728 "data_offset": 2048, 00:14:57.728 "data_size": 63488 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "name": "pt3", 00:14:57.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.728 "is_configured": true, 00:14:57.728 "data_offset": 2048, 00:14:57.728 "data_size": 63488 00:14:57.728 }, 00:14:57.728 { 00:14:57.728 "name": "pt4", 00:14:57.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.728 "is_configured": true, 00:14:57.728 "data_offset": 2048, 00:14:57.728 "data_size": 63488 00:14:57.728 } 00:14:57.728 ] 00:14:57.728 } 00:14:57.728 } 00:14:57.728 }' 00:14:57.728 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.988 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:57.988 pt2 00:14:57.988 pt3 00:14:57.988 pt4' 00:14:57.988 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.988 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:57.988 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.988 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.988 "name": "pt1", 00:14:57.988 "aliases": [ 00:14:57.988 "00000000-0000-0000-0000-000000000001" 00:14:57.988 ], 00:14:57.988 "product_name": "passthru", 00:14:57.988 "block_size": 512, 00:14:57.988 "num_blocks": 65536, 00:14:57.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.988 "assigned_rate_limits": { 00:14:57.988 "rw_ios_per_sec": 0, 00:14:57.988 "rw_mbytes_per_sec": 0, 00:14:57.988 "r_mbytes_per_sec": 0, 00:14:57.988 "w_mbytes_per_sec": 0 00:14:57.988 }, 00:14:57.988 "claimed": true, 00:14:57.988 "claim_type": "exclusive_write", 00:14:57.988 "zoned": false, 00:14:57.988 "supported_io_types": { 00:14:57.988 "read": true, 00:14:57.988 "write": true, 00:14:57.988 "unmap": true, 00:14:57.988 "flush": true, 00:14:57.988 "reset": true, 00:14:57.988 "nvme_admin": false, 00:14:57.988 "nvme_io": false, 00:14:57.988 "nvme_io_md": false, 00:14:57.988 "write_zeroes": true, 00:14:57.988 "zcopy": true, 00:14:57.988 "get_zone_info": false, 00:14:57.988 "zone_management": false, 00:14:57.988 "zone_append": false, 00:14:57.988 "compare": false, 00:14:57.988 "compare_and_write": false, 00:14:57.988 "abort": true, 00:14:57.988 "seek_hole": false, 00:14:57.988 "seek_data": false, 00:14:57.988 "copy": true, 00:14:57.988 "nvme_iov_md": false 00:14:57.988 }, 00:14:57.988 "memory_domains": [ 00:14:57.988 { 00:14:57.988 "dma_device_id": "system", 00:14:57.988 "dma_device_type": 1 00:14:57.988 }, 00:14:57.988 { 00:14:57.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.988 "dma_device_type": 2 00:14:57.988 } 00:14:57.988 ], 00:14:57.988 "driver_specific": { 00:14:57.988 "passthru": { 00:14:57.988 "name": "pt1", 00:14:57.988 "base_bdev_name": "malloc1" 00:14:57.988 } 00:14:57.988 } 00:14:57.988 }' 00:14:57.988 06:09:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.248 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.248 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.248 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.248 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.248 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.248 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.248 06:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.248 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.248 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.508 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.508 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.508 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:58.508 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:58.508 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:58.508 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:58.508 "name": "pt2", 00:14:58.508 "aliases": [ 00:14:58.508 "00000000-0000-0000-0000-000000000002" 00:14:58.508 ], 00:14:58.508 "product_name": "passthru", 00:14:58.508 "block_size": 512, 00:14:58.508 "num_blocks": 65536, 00:14:58.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.508 "assigned_rate_limits": { 00:14:58.508 "rw_ios_per_sec": 0, 00:14:58.508 "rw_mbytes_per_sec": 0, 00:14:58.508 "r_mbytes_per_sec": 0, 00:14:58.508 "w_mbytes_per_sec": 0 00:14:58.508 }, 00:14:58.508 "claimed": true, 00:14:58.508 "claim_type": "exclusive_write", 00:14:58.508 "zoned": false, 00:14:58.508 "supported_io_types": { 00:14:58.508 "read": true, 00:14:58.508 "write": true, 00:14:58.508 "unmap": true, 00:14:58.508 "flush": true, 00:14:58.508 "reset": true, 00:14:58.508 "nvme_admin": false, 00:14:58.508 "nvme_io": false, 00:14:58.508 "nvme_io_md": false, 00:14:58.508 "write_zeroes": true, 00:14:58.508 "zcopy": true, 00:14:58.508 "get_zone_info": false, 00:14:58.508 "zone_management": false, 00:14:58.508 "zone_append": false, 00:14:58.508 "compare": false, 00:14:58.508 "compare_and_write": false, 00:14:58.509 "abort": true, 00:14:58.509 "seek_hole": false, 00:14:58.509 "seek_data": false, 00:14:58.509 "copy": true, 00:14:58.509 "nvme_iov_md": false 00:14:58.509 }, 00:14:58.509 "memory_domains": [ 00:14:58.509 { 00:14:58.509 "dma_device_id": "system", 00:14:58.509 "dma_device_type": 1 00:14:58.509 }, 00:14:58.509 { 00:14:58.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.509 "dma_device_type": 2 00:14:58.509 } 00:14:58.509 ], 00:14:58.509 "driver_specific": { 00:14:58.509 "passthru": { 00:14:58.509 "name": "pt2", 00:14:58.509 "base_bdev_name": "malloc2" 00:14:58.509 } 00:14:58.509 } 00:14:58.509 }' 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.768 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:59.028 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:59.028 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:59.028 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:59.028 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:59.028 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:59.288 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:59.288 "name": "pt3", 00:14:59.288 "aliases": [ 00:14:59.288 "00000000-0000-0000-0000-000000000003" 00:14:59.288 ], 00:14:59.288 "product_name": "passthru", 00:14:59.288 "block_size": 512, 00:14:59.288 "num_blocks": 65536, 00:14:59.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.288 "assigned_rate_limits": { 00:14:59.288 "rw_ios_per_sec": 0, 00:14:59.288 "rw_mbytes_per_sec": 0, 00:14:59.288 "r_mbytes_per_sec": 0, 00:14:59.288 "w_mbytes_per_sec": 0 00:14:59.288 }, 00:14:59.288 "claimed": true, 00:14:59.288 "claim_type": "exclusive_write", 00:14:59.288 "zoned": false, 00:14:59.288 "supported_io_types": { 00:14:59.288 "read": true, 00:14:59.288 "write": true, 00:14:59.288 "unmap": true, 00:14:59.288 "flush": true, 00:14:59.288 "reset": true, 00:14:59.288 "nvme_admin": false, 00:14:59.288 "nvme_io": false, 00:14:59.288 "nvme_io_md": false, 00:14:59.288 "write_zeroes": true, 00:14:59.288 "zcopy": true, 00:14:59.288 "get_zone_info": false, 00:14:59.288 "zone_management": false, 00:14:59.288 "zone_append": false, 00:14:59.288 "compare": false, 00:14:59.288 "compare_and_write": false, 00:14:59.288 "abort": true, 00:14:59.288 "seek_hole": false, 00:14:59.288 "seek_data": false, 00:14:59.288 "copy": true, 00:14:59.288 "nvme_iov_md": false 00:14:59.288 }, 00:14:59.288 "memory_domains": [ 00:14:59.288 { 00:14:59.288 "dma_device_id": "system", 00:14:59.288 "dma_device_type": 1 00:14:59.288 }, 00:14:59.288 { 00:14:59.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.288 "dma_device_type": 2 00:14:59.288 } 00:14:59.288 ], 00:14:59.288 "driver_specific": { 00:14:59.288 "passthru": { 00:14:59.288 "name": "pt3", 00:14:59.288 "base_bdev_name": "malloc3" 00:14:59.288 } 00:14:59.288 } 00:14:59.288 }' 00:14:59.288 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:59.288 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:59.288 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:59.288 06:10:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:59.288 06:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:59.288 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:59.288 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:59.288 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:59.548 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:59.548 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:59.548 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:59.548 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:59.548 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:59.548 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:59.548 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:59.808 "name": "pt4", 00:14:59.808 "aliases": [ 00:14:59.808 "00000000-0000-0000-0000-000000000004" 00:14:59.808 ], 00:14:59.808 "product_name": "passthru", 00:14:59.808 "block_size": 512, 00:14:59.808 "num_blocks": 65536, 00:14:59.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.808 "assigned_rate_limits": { 00:14:59.808 "rw_ios_per_sec": 0, 00:14:59.808 "rw_mbytes_per_sec": 0, 00:14:59.808 "r_mbytes_per_sec": 0, 00:14:59.808 "w_mbytes_per_sec": 0 00:14:59.808 }, 00:14:59.808 "claimed": true, 00:14:59.808 "claim_type": "exclusive_write", 00:14:59.808 "zoned": false, 00:14:59.808 "supported_io_types": { 00:14:59.808 "read": true, 00:14:59.808 "write": true, 00:14:59.808 "unmap": true, 00:14:59.808 "flush": true, 00:14:59.808 "reset": true, 00:14:59.808 "nvme_admin": false, 00:14:59.808 "nvme_io": false, 00:14:59.808 "nvme_io_md": false, 00:14:59.808 "write_zeroes": true, 00:14:59.808 "zcopy": true, 00:14:59.808 "get_zone_info": false, 00:14:59.808 "zone_management": false, 00:14:59.808 "zone_append": false, 00:14:59.808 "compare": false, 00:14:59.808 "compare_and_write": false, 00:14:59.808 "abort": true, 00:14:59.808 "seek_hole": false, 00:14:59.808 "seek_data": false, 00:14:59.808 "copy": true, 00:14:59.808 "nvme_iov_md": false 00:14:59.808 }, 00:14:59.808 "memory_domains": [ 00:14:59.808 { 00:14:59.808 "dma_device_id": "system", 00:14:59.808 "dma_device_type": 1 00:14:59.808 }, 00:14:59.808 { 00:14:59.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.808 "dma_device_type": 2 00:14:59.808 } 00:14:59.808 ], 00:14:59.808 "driver_specific": { 00:14:59.808 "passthru": { 00:14:59.808 "name": "pt4", 00:14:59.808 "base_bdev_name": "malloc4" 00:14:59.808 } 00:14:59.808 } 00:14:59.808 }' 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:59.808 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.067 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.067 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.067 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.067 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.067 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:00.067 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:00.327 [2024-08-13 06:10:01.905098] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.327 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5d857abf-b45a-4650-aed9-7fc449ec3115 00:15:00.327 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 5d857abf-b45a-4650-aed9-7fc449ec3115 ']' 00:15:00.327 06:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:00.327 [2024-08-13 06:10:02.108455] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.327 [2024-08-13 06:10:02.108489] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.327 [2024-08-13 06:10:02.108598] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.327 [2024-08-13 06:10:02.108684] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.327 [2024-08-13 06:10:02.108717] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:00.587 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.587 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:00.587 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:00.587 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:00.587 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.587 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:00.846 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.846 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:01.108 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.108 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:01.108 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.108 06:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:01.380 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:01.380 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:01.673 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:01.674 [2024-08-13 06:10:03.442232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:01.674 [2024-08-13 06:10:03.444041] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:01.674 [2024-08-13 06:10:03.444089] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:01.674 [2024-08-13 06:10:03.444119] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:01.674 [2024-08-13 06:10:03.444165] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:01.674 [2024-08-13 06:10:03.444230] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:01.674 [2024-08-13 06:10:03.444249] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:01.674 [2024-08-13 06:10:03.444265] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:01.674 [2024-08-13 06:10:03.444277] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.674 [2024-08-13 06:10:03.444288] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:01.674 request: 00:15:01.674 { 00:15:01.674 "name": "raid_bdev1", 00:15:01.674 "raid_level": "concat", 00:15:01.674 "base_bdevs": [ 00:15:01.674 "malloc1", 00:15:01.674 "malloc2", 00:15:01.674 "malloc3", 00:15:01.674 "malloc4" 00:15:01.674 ], 00:15:01.674 "strip_size_kb": 64, 00:15:01.674 "superblock": false, 00:15:01.674 "method": "bdev_raid_create", 00:15:01.674 "req_id": 1 00:15:01.674 } 00:15:01.674 Got JSON-RPC error response 00:15:01.674 response: 00:15:01.674 { 00:15:01.674 "code": -17, 00:15:01.674 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:01.674 } 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:01.674 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.934 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:01.934 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:01.934 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.193 [2024-08-13 06:10:03.849495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.193 [2024-08-13 06:10:03.849566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.193 [2024-08-13 06:10:03.849586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:02.193 [2024-08-13 06:10:03.849599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.193 [2024-08-13 06:10:03.851719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.193 [2024-08-13 06:10:03.851760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.193 [2024-08-13 06:10:03.851843] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:02.193 [2024-08-13 06:10:03.851887] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.193 pt1 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:02.193 06:10:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.193 06:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.451 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.451 "name": "raid_bdev1", 00:15:02.451 "uuid": "5d857abf-b45a-4650-aed9-7fc449ec3115", 00:15:02.451 "strip_size_kb": 64, 00:15:02.451 "state": "configuring", 00:15:02.451 "raid_level": "concat", 00:15:02.451 "superblock": true, 00:15:02.451 "num_base_bdevs": 4, 00:15:02.451 "num_base_bdevs_discovered": 1, 00:15:02.451 "num_base_bdevs_operational": 4, 00:15:02.451 "base_bdevs_list": [ 00:15:02.451 { 00:15:02.451 "name": "pt1", 00:15:02.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.451 "is_configured": true, 00:15:02.451 "data_offset": 2048, 00:15:02.451 "data_size": 63488 00:15:02.451 }, 00:15:02.451 { 00:15:02.451 "name": null, 00:15:02.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.451 "is_configured": false, 00:15:02.451 "data_offset": 2048, 00:15:02.451 "data_size": 63488 00:15:02.451 }, 00:15:02.451 { 00:15:02.451 "name": null, 00:15:02.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.452 "is_configured": false, 00:15:02.452 "data_offset": 2048, 00:15:02.452 "data_size": 63488 00:15:02.452 }, 00:15:02.452 { 00:15:02.452 "name": null, 00:15:02.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.452 "is_configured": false, 00:15:02.452 "data_offset": 2048, 00:15:02.452 "data_size": 63488 00:15:02.452 } 00:15:02.452 ] 00:15:02.452 }' 00:15:02.452 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.452 06:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.020 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:15:03.020 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.020 [2024-08-13 06:10:04.751895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.020 [2024-08-13 06:10:04.751988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.020 [2024-08-13 06:10:04.752020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:03.020 [2024-08-13 06:10:04.752062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:15:03.020 [2024-08-13 06:10:04.752437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.020 [2024-08-13 06:10:04.752494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.020 [2024-08-13 06:10:04.752587] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.020 [2024-08-13 06:10:04.752633] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.020 pt2 00:15:03.020 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:03.280 [2024-08-13 06:10:04.959616] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.280 06:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.540 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.540 "name": "raid_bdev1", 00:15:03.540 "uuid": "5d857abf-b45a-4650-aed9-7fc449ec3115", 00:15:03.540 "strip_size_kb": 64, 00:15:03.540 "state": "configuring", 00:15:03.540 "raid_level": "concat", 00:15:03.540 "superblock": true, 00:15:03.540 "num_base_bdevs": 4, 00:15:03.540 "num_base_bdevs_discovered": 1, 00:15:03.540 "num_base_bdevs_operational": 4, 00:15:03.540 "base_bdevs_list": [ 00:15:03.540 { 00:15:03.540 "name": "pt1", 00:15:03.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.540 "is_configured": true, 00:15:03.540 "data_offset": 2048, 00:15:03.540 "data_size": 63488 00:15:03.540 }, 00:15:03.540 { 00:15:03.540 "name": null, 00:15:03.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.540 "is_configured": false, 00:15:03.540 "data_offset": 2048, 00:15:03.540 "data_size": 63488 00:15:03.540 }, 00:15:03.540 { 00:15:03.540 "name": null, 00:15:03.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.540 "is_configured": false, 00:15:03.540 "data_offset": 2048, 00:15:03.540 "data_size": 63488 00:15:03.540 }, 00:15:03.540 { 00:15:03.540 "name": null, 00:15:03.540 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.540 "is_configured": false, 00:15:03.540 "data_offset": 2048, 
00:15:03.540 "data_size": 63488 00:15:03.540 } 00:15:03.540 ] 00:15:03.540 }' 00:15:03.540 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.540 06:10:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.108 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:04.108 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:04.108 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:04.367 [2024-08-13 06:10:05.941917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:04.367 [2024-08-13 06:10:05.941987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.367 [2024-08-13 06:10:05.942009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:04.367 [2024-08-13 06:10:05.942018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.367 [2024-08-13 06:10:05.942438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.367 [2024-08-13 06:10:05.942458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:04.367 [2024-08-13 06:10:05.942530] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:04.367 [2024-08-13 06:10:05.942558] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.367 pt2 00:15:04.367 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:04.367 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:04.367 06:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:04.367 [2024-08-13 06:10:06.133658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:04.367 [2024-08-13 06:10:06.133711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.367 [2024-08-13 06:10:06.133731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:04.367 [2024-08-13 06:10:06.133747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.367 [2024-08-13 06:10:06.134088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.367 [2024-08-13 06:10:06.134105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:04.367 [2024-08-13 06:10:06.134171] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:04.367 [2024-08-13 06:10:06.134189] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:04.367 pt3 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 
00:15:04.627 [2024-08-13 06:10:06.345516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:04.627 [2024-08-13 06:10:06.345606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.627 [2024-08-13 06:10:06.345642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:04.627 [2024-08-13 06:10:06.345670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.627 [2024-08-13 06:10:06.346063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.627 [2024-08-13 06:10:06.346117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:04.627 [2024-08-13 06:10:06.346204] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:04.627 [2024-08-13 06:10:06.346250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:04.627 [2024-08-13 06:10:06.346373] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:04.627 [2024-08-13 06:10:06.346408] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:04.627 [2024-08-13 06:10:06.346633] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:04.627 [2024-08-13 06:10:06.346774] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:04.627 [2024-08-13 06:10:06.346812] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:04.627 [2024-08-13 06:10:06.346929] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.627 pt4 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.627 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.887 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.887 "name": "raid_bdev1", 00:15:04.887 "uuid": 
"5d857abf-b45a-4650-aed9-7fc449ec3115", 00:15:04.887 "strip_size_kb": 64, 00:15:04.887 "state": "online", 00:15:04.887 "raid_level": "concat", 00:15:04.887 "superblock": true, 00:15:04.887 "num_base_bdevs": 4, 00:15:04.887 "num_base_bdevs_discovered": 4, 00:15:04.887 "num_base_bdevs_operational": 4, 00:15:04.887 "base_bdevs_list": [ 00:15:04.887 { 00:15:04.887 "name": "pt1", 00:15:04.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.887 "is_configured": true, 00:15:04.887 "data_offset": 2048, 00:15:04.887 "data_size": 63488 00:15:04.887 }, 00:15:04.887 { 00:15:04.887 "name": "pt2", 00:15:04.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.887 "is_configured": true, 00:15:04.887 "data_offset": 2048, 00:15:04.887 "data_size": 63488 00:15:04.887 }, 00:15:04.887 { 00:15:04.887 "name": "pt3", 00:15:04.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.887 "is_configured": true, 00:15:04.887 "data_offset": 2048, 00:15:04.887 "data_size": 63488 00:15:04.887 }, 00:15:04.887 { 00:15:04.887 "name": "pt4", 00:15:04.887 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.887 "is_configured": true, 00:15:04.887 "data_offset": 2048, 00:15:04.887 "data_size": 63488 00:15:04.887 } 00:15:04.887 ] 00:15:04.887 }' 00:15:04.887 06:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.887 06:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:05.455 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:05.715 [2024-08-13 06:10:07.296264] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.715 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:05.715 "name": "raid_bdev1", 00:15:05.715 "aliases": [ 00:15:05.715 "5d857abf-b45a-4650-aed9-7fc449ec3115" 00:15:05.715 ], 00:15:05.715 "product_name": "Raid Volume", 00:15:05.715 "block_size": 512, 00:15:05.715 "num_blocks": 253952, 00:15:05.715 "uuid": "5d857abf-b45a-4650-aed9-7fc449ec3115", 00:15:05.715 "assigned_rate_limits": { 00:15:05.715 "rw_ios_per_sec": 0, 00:15:05.715 "rw_mbytes_per_sec": 0, 00:15:05.715 "r_mbytes_per_sec": 0, 00:15:05.715 "w_mbytes_per_sec": 0 00:15:05.715 }, 00:15:05.715 "claimed": false, 00:15:05.715 "zoned": false, 00:15:05.715 "supported_io_types": { 00:15:05.715 "read": true, 00:15:05.715 "write": true, 00:15:05.715 "unmap": true, 00:15:05.715 "flush": true, 00:15:05.715 "reset": true, 00:15:05.715 "nvme_admin": false, 00:15:05.715 "nvme_io": false, 00:15:05.715 "nvme_io_md": false, 00:15:05.715 "write_zeroes": true, 00:15:05.715 "zcopy": false, 00:15:05.715 "get_zone_info": false, 00:15:05.715 "zone_management": 
false, 00:15:05.715 "zone_append": false, 00:15:05.715 "compare": false, 00:15:05.715 "compare_and_write": false, 00:15:05.715 "abort": false, 00:15:05.715 "seek_hole": false, 00:15:05.715 "seek_data": false, 00:15:05.715 "copy": false, 00:15:05.715 "nvme_iov_md": false 00:15:05.715 }, 00:15:05.715 "memory_domains": [ 00:15:05.715 { 00:15:05.715 "dma_device_id": "system", 00:15:05.715 "dma_device_type": 1 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.715 "dma_device_type": 2 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "dma_device_id": "system", 00:15:05.715 "dma_device_type": 1 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.715 "dma_device_type": 2 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "dma_device_id": "system", 00:15:05.715 "dma_device_type": 1 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.715 "dma_device_type": 2 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "dma_device_id": "system", 00:15:05.715 "dma_device_type": 1 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.715 "dma_device_type": 2 00:15:05.715 } 00:15:05.715 ], 00:15:05.715 "driver_specific": { 00:15:05.715 "raid": { 00:15:05.715 "uuid": "5d857abf-b45a-4650-aed9-7fc449ec3115", 00:15:05.715 "strip_size_kb": 64, 00:15:05.715 "state": "online", 00:15:05.715 "raid_level": "concat", 00:15:05.715 "superblock": true, 00:15:05.715 "num_base_bdevs": 4, 00:15:05.715 "num_base_bdevs_discovered": 4, 00:15:05.715 "num_base_bdevs_operational": 4, 00:15:05.715 "base_bdevs_list": [ 00:15:05.715 { 00:15:05.715 "name": "pt1", 00:15:05.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.715 "is_configured": true, 00:15:05.715 "data_offset": 2048, 00:15:05.715 "data_size": 63488 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "name": "pt2", 00:15:05.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.715 "is_configured": true, 00:15:05.715 "data_offset": 2048, 00:15:05.715 "data_size": 63488 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "name": "pt3", 00:15:05.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.715 "is_configured": true, 00:15:05.715 "data_offset": 2048, 00:15:05.715 "data_size": 63488 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "name": "pt4", 00:15:05.715 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.715 "is_configured": true, 00:15:05.715 "data_offset": 2048, 00:15:05.715 "data_size": 63488 00:15:05.715 } 00:15:05.715 ] 00:15:05.715 } 00:15:05.715 } 00:15:05.715 }' 00:15:05.715 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.715 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:05.715 pt2 00:15:05.715 pt3 00:15:05.715 pt4' 00:15:05.715 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:05.715 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:05.715 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:05.974 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.974 "name": "pt1", 00:15:05.974 "aliases": [ 00:15:05.974 "00000000-0000-0000-0000-000000000001" 00:15:05.974 ], 00:15:05.974 "product_name": 
"passthru", 00:15:05.974 "block_size": 512, 00:15:05.974 "num_blocks": 65536, 00:15:05.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.974 "assigned_rate_limits": { 00:15:05.974 "rw_ios_per_sec": 0, 00:15:05.974 "rw_mbytes_per_sec": 0, 00:15:05.974 "r_mbytes_per_sec": 0, 00:15:05.974 "w_mbytes_per_sec": 0 00:15:05.974 }, 00:15:05.974 "claimed": true, 00:15:05.974 "claim_type": "exclusive_write", 00:15:05.974 "zoned": false, 00:15:05.974 "supported_io_types": { 00:15:05.974 "read": true, 00:15:05.974 "write": true, 00:15:05.974 "unmap": true, 00:15:05.974 "flush": true, 00:15:05.974 "reset": true, 00:15:05.974 "nvme_admin": false, 00:15:05.974 "nvme_io": false, 00:15:05.974 "nvme_io_md": false, 00:15:05.974 "write_zeroes": true, 00:15:05.974 "zcopy": true, 00:15:05.974 "get_zone_info": false, 00:15:05.974 "zone_management": false, 00:15:05.974 "zone_append": false, 00:15:05.974 "compare": false, 00:15:05.974 "compare_and_write": false, 00:15:05.974 "abort": true, 00:15:05.974 "seek_hole": false, 00:15:05.974 "seek_data": false, 00:15:05.974 "copy": true, 00:15:05.974 "nvme_iov_md": false 00:15:05.974 }, 00:15:05.974 "memory_domains": [ 00:15:05.974 { 00:15:05.974 "dma_device_id": "system", 00:15:05.974 "dma_device_type": 1 00:15:05.974 }, 00:15:05.974 { 00:15:05.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.974 "dma_device_type": 2 00:15:05.974 } 00:15:05.974 ], 00:15:05.974 "driver_specific": { 00:15:05.974 "passthru": { 00:15:05.974 "name": "pt1", 00:15:05.974 "base_bdev_name": "malloc1" 00:15:05.974 } 00:15:05.974 } 00:15:05.974 }' 00:15:05.974 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.974 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.974 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:05.974 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.974 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:06.233 06:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.492 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.492 "name": "pt2", 00:15:06.492 "aliases": [ 00:15:06.492 "00000000-0000-0000-0000-000000000002" 00:15:06.492 ], 00:15:06.492 "product_name": "passthru", 00:15:06.492 "block_size": 512, 00:15:06.492 "num_blocks": 65536, 00:15:06.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.492 
"assigned_rate_limits": { 00:15:06.492 "rw_ios_per_sec": 0, 00:15:06.492 "rw_mbytes_per_sec": 0, 00:15:06.492 "r_mbytes_per_sec": 0, 00:15:06.492 "w_mbytes_per_sec": 0 00:15:06.492 }, 00:15:06.492 "claimed": true, 00:15:06.493 "claim_type": "exclusive_write", 00:15:06.493 "zoned": false, 00:15:06.493 "supported_io_types": { 00:15:06.493 "read": true, 00:15:06.493 "write": true, 00:15:06.493 "unmap": true, 00:15:06.493 "flush": true, 00:15:06.493 "reset": true, 00:15:06.493 "nvme_admin": false, 00:15:06.493 "nvme_io": false, 00:15:06.493 "nvme_io_md": false, 00:15:06.493 "write_zeroes": true, 00:15:06.493 "zcopy": true, 00:15:06.493 "get_zone_info": false, 00:15:06.493 "zone_management": false, 00:15:06.493 "zone_append": false, 00:15:06.493 "compare": false, 00:15:06.493 "compare_and_write": false, 00:15:06.493 "abort": true, 00:15:06.493 "seek_hole": false, 00:15:06.493 "seek_data": false, 00:15:06.493 "copy": true, 00:15:06.493 "nvme_iov_md": false 00:15:06.493 }, 00:15:06.493 "memory_domains": [ 00:15:06.493 { 00:15:06.493 "dma_device_id": "system", 00:15:06.493 "dma_device_type": 1 00:15:06.493 }, 00:15:06.493 { 00:15:06.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.493 "dma_device_type": 2 00:15:06.493 } 00:15:06.493 ], 00:15:06.493 "driver_specific": { 00:15:06.493 "passthru": { 00:15:06.493 "name": "pt2", 00:15:06.493 "base_bdev_name": "malloc2" 00:15:06.493 } 00:15:06.493 } 00:15:06.493 }' 00:15:06.493 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.493 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.493 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.493 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.752 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.752 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.752 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.752 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.752 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.752 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.752 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.011 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.011 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.011 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:07.011 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.011 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.011 "name": "pt3", 00:15:07.011 "aliases": [ 00:15:07.011 "00000000-0000-0000-0000-000000000003" 00:15:07.011 ], 00:15:07.011 "product_name": "passthru", 00:15:07.011 "block_size": 512, 00:15:07.011 "num_blocks": 65536, 00:15:07.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.011 "assigned_rate_limits": { 00:15:07.011 "rw_ios_per_sec": 0, 00:15:07.011 "rw_mbytes_per_sec": 0, 00:15:07.011 "r_mbytes_per_sec": 0, 00:15:07.011 
"w_mbytes_per_sec": 0 00:15:07.011 }, 00:15:07.011 "claimed": true, 00:15:07.011 "claim_type": "exclusive_write", 00:15:07.011 "zoned": false, 00:15:07.011 "supported_io_types": { 00:15:07.011 "read": true, 00:15:07.011 "write": true, 00:15:07.011 "unmap": true, 00:15:07.011 "flush": true, 00:15:07.011 "reset": true, 00:15:07.011 "nvme_admin": false, 00:15:07.011 "nvme_io": false, 00:15:07.011 "nvme_io_md": false, 00:15:07.011 "write_zeroes": true, 00:15:07.011 "zcopy": true, 00:15:07.011 "get_zone_info": false, 00:15:07.011 "zone_management": false, 00:15:07.011 "zone_append": false, 00:15:07.011 "compare": false, 00:15:07.011 "compare_and_write": false, 00:15:07.011 "abort": true, 00:15:07.011 "seek_hole": false, 00:15:07.011 "seek_data": false, 00:15:07.011 "copy": true, 00:15:07.011 "nvme_iov_md": false 00:15:07.011 }, 00:15:07.011 "memory_domains": [ 00:15:07.011 { 00:15:07.011 "dma_device_id": "system", 00:15:07.011 "dma_device_type": 1 00:15:07.011 }, 00:15:07.011 { 00:15:07.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.011 "dma_device_type": 2 00:15:07.011 } 00:15:07.011 ], 00:15:07.011 "driver_specific": { 00:15:07.011 "passthru": { 00:15:07.011 "name": "pt3", 00:15:07.011 "base_bdev_name": "malloc3" 00:15:07.011 } 00:15:07.011 } 00:15:07.011 }' 00:15:07.011 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.270 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.270 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.270 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.270 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.270 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.270 06:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.270 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.270 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.529 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.529 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.529 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.529 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.529 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:07.529 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.789 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.789 "name": "pt4", 00:15:07.789 "aliases": [ 00:15:07.789 "00000000-0000-0000-0000-000000000004" 00:15:07.789 ], 00:15:07.789 "product_name": "passthru", 00:15:07.789 "block_size": 512, 00:15:07.789 "num_blocks": 65536, 00:15:07.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.789 "assigned_rate_limits": { 00:15:07.789 "rw_ios_per_sec": 0, 00:15:07.789 "rw_mbytes_per_sec": 0, 00:15:07.789 "r_mbytes_per_sec": 0, 00:15:07.789 "w_mbytes_per_sec": 0 00:15:07.789 }, 00:15:07.789 "claimed": true, 00:15:07.789 "claim_type": "exclusive_write", 00:15:07.789 "zoned": false, 
00:15:07.789 "supported_io_types": { 00:15:07.789 "read": true, 00:15:07.789 "write": true, 00:15:07.789 "unmap": true, 00:15:07.789 "flush": true, 00:15:07.789 "reset": true, 00:15:07.790 "nvme_admin": false, 00:15:07.790 "nvme_io": false, 00:15:07.790 "nvme_io_md": false, 00:15:07.790 "write_zeroes": true, 00:15:07.790 "zcopy": true, 00:15:07.790 "get_zone_info": false, 00:15:07.790 "zone_management": false, 00:15:07.790 "zone_append": false, 00:15:07.790 "compare": false, 00:15:07.790 "compare_and_write": false, 00:15:07.790 "abort": true, 00:15:07.790 "seek_hole": false, 00:15:07.790 "seek_data": false, 00:15:07.790 "copy": true, 00:15:07.790 "nvme_iov_md": false 00:15:07.790 }, 00:15:07.790 "memory_domains": [ 00:15:07.790 { 00:15:07.790 "dma_device_id": "system", 00:15:07.790 "dma_device_type": 1 00:15:07.790 }, 00:15:07.790 { 00:15:07.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.790 "dma_device_type": 2 00:15:07.790 } 00:15:07.790 ], 00:15:07.790 "driver_specific": { 00:15:07.790 "passthru": { 00:15:07.790 "name": "pt4", 00:15:07.790 "base_bdev_name": "malloc4" 00:15:07.790 } 00:15:07.790 } 00:15:07.790 }' 00:15:07.790 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.790 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.790 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.790 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.790 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.790 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.790 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.050 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.050 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.050 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.050 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.050 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.050 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:08.050 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:08.310 [2024-08-13 06:10:09.908247] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.310 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 5d857abf-b45a-4650-aed9-7fc449ec3115 '!=' 5d857abf-b45a-4650-aed9-7fc449ec3115 ']' 00:15:08.310 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:15:08.310 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:08.310 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:08.310 06:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 88052 00:15:08.310 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 88052 ']' 00:15:08.310 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 88052 00:15:08.311 06:10:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:08.311 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:08.311 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88052 00:15:08.311 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:08.311 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:08.311 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88052' 00:15:08.311 killing process with pid 88052 00:15:08.311 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 88052 00:15:08.311 [2024-08-13 06:10:09.969484] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.311 06:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 88052 00:15:08.311 [2024-08-13 06:10:09.969675] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.311 [2024-08-13 06:10:09.969796] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.311 [2024-08-13 06:10:09.969831] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:08.311 [2024-08-13 06:10:10.051103] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.881 06:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:15:08.882 00:15:08.882 real 0m14.646s 00:15:08.882 user 0m26.153s 00:15:08.882 sys 0m2.497s 00:15:08.882 06:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:08.882 06:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.882 ************************************ 00:15:08.882 END TEST raid_superblock_test 00:15:08.882 ************************************ 00:15:08.882 06:10:10 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:08.882 06:10:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:08.882 06:10:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:08.882 06:10:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.882 ************************************ 00:15:08.882 START TEST raid_read_error_test 00:15:08.882 ************************************ 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 read 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.bbSc44MTbv 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=88551 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 88551 /var/tmp/spdk-raid.sock 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 88551 ']' 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:08.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
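The commands traced in this stretch start bdevperf in wait-for-RPC mode against the raid socket and, further down in the log, stack malloc, error and passthru bdevs under a concat raid bdev before injecting read errors. A condensed standalone sketch (binary paths, bdev names and bdevperf flags are copied from the trace; waitforlisten, log capture and cleanup are omitted) looks roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$sock" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # once the RPC server on $sock answers, build each base bdev as malloc -> error -> passthru
    for i in 1 2 3 4; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_error_create "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev$i"
    done
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # make reads on the first base bdev fail, then drive I/O through the waiting bdevperf job
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests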
00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:08.882 06:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.882 [2024-08-13 06:10:10.617243] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:15:08.882 [2024-08-13 06:10:10.617499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88551 ] 00:15:09.142 [2024-08-13 06:10:10.764829] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.142 [2024-08-13 06:10:10.813217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.142 [2024-08-13 06:10:10.856754] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.142 [2024-08-13 06:10:10.856889] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.712 06:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:09.712 06:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:09.712 06:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:09.712 06:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:09.972 BaseBdev1_malloc 00:15:09.972 06:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:10.232 true 00:15:10.232 06:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:10.232 [2024-08-13 06:10:11.937355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:10.232 [2024-08-13 06:10:11.937511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.232 [2024-08-13 06:10:11.937537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:10.232 [2024-08-13 06:10:11.937550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.232 [2024-08-13 06:10:11.939649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.232 [2024-08-13 06:10:11.939693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.232 BaseBdev1 00:15:10.232 06:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:10.232 06:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.492 BaseBdev2_malloc 00:15:10.492 06:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:10.751 true 00:15:10.751 06:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:15:10.751 [2024-08-13 06:10:12.533252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:10.751 [2024-08-13 06:10:12.533322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.751 [2024-08-13 06:10:12.533345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:10.751 [2024-08-13 06:10:12.533356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.751 [2024-08-13 06:10:12.535430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.751 [2024-08-13 06:10:12.535474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.752 BaseBdev2 00:15:11.011 06:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:11.011 06:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:11.011 BaseBdev3_malloc 00:15:11.011 06:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:11.271 true 00:15:11.271 06:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:11.531 [2024-08-13 06:10:13.155669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:11.531 [2024-08-13 06:10:13.155730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.531 [2024-08-13 06:10:13.155748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:11.531 [2024-08-13 06:10:13.155758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.531 [2024-08-13 06:10:13.157716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.531 [2024-08-13 06:10:13.157840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:11.531 BaseBdev3 00:15:11.531 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:11.531 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:11.791 BaseBdev4_malloc 00:15:11.791 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:11.791 true 00:15:11.792 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:12.052 [2024-08-13 06:10:13.755373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:12.052 [2024-08-13 06:10:13.755516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.052 [2024-08-13 06:10:13.755555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:12.052 [2024-08-13 06:10:13.755587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:15:12.052 [2024-08-13 06:10:13.757596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.052 [2024-08-13 06:10:13.757677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:12.052 BaseBdev4 00:15:12.052 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:12.311 [2024-08-13 06:10:13.951219] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.311 [2024-08-13 06:10:13.952958] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.311 [2024-08-13 06:10:13.953088] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.311 [2024-08-13 06:10:13.953176] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:12.311 [2024-08-13 06:10:13.953408] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:15:12.311 [2024-08-13 06:10:13.953462] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:12.311 [2024-08-13 06:10:13.953749] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:12.311 [2024-08-13 06:10:13.953930] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:15:12.311 [2024-08-13 06:10:13.953971] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:15:12.311 [2024-08-13 06:10:13.954168] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.311 06:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.570 06:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.570 "name": "raid_bdev1", 00:15:12.570 "uuid": "d60a1853-0261-47ca-ac66-8c4b69203695", 00:15:12.570 "strip_size_kb": 64, 00:15:12.570 "state": "online", 00:15:12.570 "raid_level": "concat", 00:15:12.570 "superblock": true, 
00:15:12.570 "num_base_bdevs": 4, 00:15:12.570 "num_base_bdevs_discovered": 4, 00:15:12.570 "num_base_bdevs_operational": 4, 00:15:12.570 "base_bdevs_list": [ 00:15:12.570 { 00:15:12.570 "name": "BaseBdev1", 00:15:12.570 "uuid": "7c63481a-3387-5749-a6bd-f5a9b8a710ed", 00:15:12.570 "is_configured": true, 00:15:12.570 "data_offset": 2048, 00:15:12.570 "data_size": 63488 00:15:12.570 }, 00:15:12.570 { 00:15:12.570 "name": "BaseBdev2", 00:15:12.570 "uuid": "a2d48fba-bdf0-5a37-a12b-6ebbf25b3bf6", 00:15:12.570 "is_configured": true, 00:15:12.570 "data_offset": 2048, 00:15:12.570 "data_size": 63488 00:15:12.570 }, 00:15:12.570 { 00:15:12.570 "name": "BaseBdev3", 00:15:12.570 "uuid": "8cbac91c-3ed7-5fb1-9a00-99b973e6ef4e", 00:15:12.570 "is_configured": true, 00:15:12.570 "data_offset": 2048, 00:15:12.570 "data_size": 63488 00:15:12.570 }, 00:15:12.570 { 00:15:12.570 "name": "BaseBdev4", 00:15:12.570 "uuid": "41486b7a-ef32-56f3-924f-16b46516871a", 00:15:12.570 "is_configured": true, 00:15:12.570 "data_offset": 2048, 00:15:12.570 "data_size": 63488 00:15:12.570 } 00:15:12.570 ] 00:15:12.570 }' 00:15:12.570 06:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.570 06:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.139 06:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:13.139 06:10:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:13.139 [2024-08-13 06:10:14.802012] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:14.078 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.338 06:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.338 06:10:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.598 06:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.598 "name": "raid_bdev1", 00:15:14.598 "uuid": "d60a1853-0261-47ca-ac66-8c4b69203695", 00:15:14.598 "strip_size_kb": 64, 00:15:14.598 "state": "online", 00:15:14.598 "raid_level": "concat", 00:15:14.598 "superblock": true, 00:15:14.598 "num_base_bdevs": 4, 00:15:14.598 "num_base_bdevs_discovered": 4, 00:15:14.598 "num_base_bdevs_operational": 4, 00:15:14.598 "base_bdevs_list": [ 00:15:14.598 { 00:15:14.598 "name": "BaseBdev1", 00:15:14.598 "uuid": "7c63481a-3387-5749-a6bd-f5a9b8a710ed", 00:15:14.598 "is_configured": true, 00:15:14.598 "data_offset": 2048, 00:15:14.598 "data_size": 63488 00:15:14.598 }, 00:15:14.598 { 00:15:14.598 "name": "BaseBdev2", 00:15:14.598 "uuid": "a2d48fba-bdf0-5a37-a12b-6ebbf25b3bf6", 00:15:14.598 "is_configured": true, 00:15:14.598 "data_offset": 2048, 00:15:14.598 "data_size": 63488 00:15:14.598 }, 00:15:14.598 { 00:15:14.598 "name": "BaseBdev3", 00:15:14.598 "uuid": "8cbac91c-3ed7-5fb1-9a00-99b973e6ef4e", 00:15:14.598 "is_configured": true, 00:15:14.598 "data_offset": 2048, 00:15:14.598 "data_size": 63488 00:15:14.598 }, 00:15:14.598 { 00:15:14.598 "name": "BaseBdev4", 00:15:14.598 "uuid": "41486b7a-ef32-56f3-924f-16b46516871a", 00:15:14.598 "is_configured": true, 00:15:14.598 "data_offset": 2048, 00:15:14.598 "data_size": 63488 00:15:14.598 } 00:15:14.598 ] 00:15:14.598 }' 00:15:14.598 06:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.598 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:15.169 [2024-08-13 06:10:16.852136] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.169 [2024-08-13 06:10:16.852272] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.169 [2024-08-13 06:10:16.854433] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.169 [2024-08-13 06:10:16.854523] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.169 [2024-08-13 06:10:16.854579] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.169 [2024-08-13 06:10:16.854619] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:15:15.169 0 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 88551 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 88551 ']' 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 88551 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88551 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88551' 00:15:15.169 killing process with pid 88551 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 88551 00:15:15.169 [2024-08-13 06:10:16.911811] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.169 06:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 88551 00:15:15.169 [2024-08-13 06:10:16.946431] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.bbSc44MTbv 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:15:15.429 ************************************ 00:15:15.429 END TEST raid_read_error_test 00:15:15.429 ************************************ 00:15:15.429 00:15:15.429 real 0m6.681s 00:15:15.429 user 0m10.512s 00:15:15.429 sys 0m1.031s 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:15.429 06:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.689 06:10:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:15.690 06:10:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:15.690 06:10:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:15.690 06:10:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.690 ************************************ 00:15:15.690 START TEST raid_write_error_test 00:15:15.690 ************************************ 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 write 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.BI8SzmN8I7 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=88734 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 88734 /var/tmp/spdk-raid.sock 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 88734 ']' 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:15.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.690 06:10:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.690 [2024-08-13 06:10:17.372068] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:15:15.690 [2024-08-13 06:10:17.372193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88734 ] 00:15:15.950 [2024-08-13 06:10:17.496829] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.950 [2024-08-13 06:10:17.541009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.950 [2024-08-13 06:10:17.584097] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.950 [2024-08-13 06:10:17.584225] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.519 06:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.519 06:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:16.519 06:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:16.519 06:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:16.779 BaseBdev1_malloc 00:15:16.779 06:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:16.779 true 00:15:17.039 06:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:17.039 [2024-08-13 06:10:18.724114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:17.039 [2024-08-13 06:10:18.724183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.039 [2024-08-13 06:10:18.724204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:17.039 [2024-08-13 06:10:18.724219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.039 [2024-08-13 06:10:18.726264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.039 [2024-08-13 06:10:18.726309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.039 BaseBdev1 00:15:17.039 06:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:17.039 06:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:17.299 BaseBdev2_malloc 00:15:17.299 06:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:17.559 true 00:15:17.559 06:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:17.559 [2024-08-13 06:10:19.299871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:17.559 [2024-08-13 06:10:19.299945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.559 [2024-08-13 06:10:19.299966] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:17.559 [2024-08-13 06:10:19.299976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.559 [2024-08-13 06:10:19.301967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.559 [2024-08-13 06:10:19.302105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:17.559 BaseBdev2 00:15:17.559 06:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:17.559 06:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:17.819 BaseBdev3_malloc 00:15:17.819 06:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:18.078 true 00:15:18.078 06:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:18.338 [2024-08-13 06:10:19.951121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:18.338 [2024-08-13 06:10:19.951258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.338 [2024-08-13 06:10:19.951289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:18.338 [2024-08-13 06:10:19.951317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.338 [2024-08-13 06:10:19.953240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.338 [2024-08-13 06:10:19.953316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:18.338 BaseBdev3 00:15:18.338 06:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:18.338 06:10:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:18.597 BaseBdev4_malloc 00:15:18.597 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:18.597 true 00:15:18.857 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:18.857 [2024-08-13 06:10:20.578611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:18.857 [2024-08-13 06:10:20.578729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.857 [2024-08-13 06:10:20.578760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:18.857 [2024-08-13 06:10:20.578787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.857 [2024-08-13 06:10:20.580645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.857 [2024-08-13 06:10:20.580720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:18.857 BaseBdev4 00:15:18.857 
06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:19.117 [2024-08-13 06:10:20.786304] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.117 [2024-08-13 06:10:20.787946] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.117 [2024-08-13 06:10:20.788069] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.117 [2024-08-13 06:10:20.788150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.117 [2024-08-13 06:10:20.788368] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:15:19.118 [2024-08-13 06:10:20.788419] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:19.118 [2024-08-13 06:10:20.788695] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:19.118 [2024-08-13 06:10:20.788864] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:15:19.118 [2024-08-13 06:10:20.788904] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:15:19.118 [2024-08-13 06:10:20.789079] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.118 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.378 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.378 "name": "raid_bdev1", 00:15:19.378 "uuid": "1c8417a3-f76d-49b4-91cd-99b25030a901", 00:15:19.378 "strip_size_kb": 64, 00:15:19.378 "state": "online", 00:15:19.378 "raid_level": "concat", 00:15:19.378 "superblock": true, 00:15:19.378 "num_base_bdevs": 4, 00:15:19.378 "num_base_bdevs_discovered": 4, 00:15:19.378 "num_base_bdevs_operational": 4, 00:15:19.378 "base_bdevs_list": [ 00:15:19.378 { 00:15:19.378 "name": "BaseBdev1", 00:15:19.378 "uuid": "4077e82d-e684-5dd4-b984-2d30f90bf26b", 00:15:19.378 
"is_configured": true, 00:15:19.378 "data_offset": 2048, 00:15:19.378 "data_size": 63488 00:15:19.378 }, 00:15:19.378 { 00:15:19.378 "name": "BaseBdev2", 00:15:19.378 "uuid": "9e722ffc-97f4-5a4c-9f1f-e99d1ccd1935", 00:15:19.378 "is_configured": true, 00:15:19.378 "data_offset": 2048, 00:15:19.378 "data_size": 63488 00:15:19.378 }, 00:15:19.378 { 00:15:19.378 "name": "BaseBdev3", 00:15:19.378 "uuid": "90dfdebb-a054-582b-bdcd-0bdeb808b498", 00:15:19.378 "is_configured": true, 00:15:19.378 "data_offset": 2048, 00:15:19.378 "data_size": 63488 00:15:19.378 }, 00:15:19.378 { 00:15:19.378 "name": "BaseBdev4", 00:15:19.378 "uuid": "1a9cdc6e-f4b5-56e5-9114-9becb6f8f33f", 00:15:19.378 "is_configured": true, 00:15:19.378 "data_offset": 2048, 00:15:19.378 "data_size": 63488 00:15:19.378 } 00:15:19.378 ] 00:15:19.378 }' 00:15:19.378 06:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.378 06:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.947 06:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:19.947 06:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:19.947 [2024-08-13 06:10:21.613421] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:20.888 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.148 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.408 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.408 "name": "raid_bdev1", 00:15:21.408 "uuid": 
"1c8417a3-f76d-49b4-91cd-99b25030a901", 00:15:21.408 "strip_size_kb": 64, 00:15:21.408 "state": "online", 00:15:21.408 "raid_level": "concat", 00:15:21.408 "superblock": true, 00:15:21.408 "num_base_bdevs": 4, 00:15:21.408 "num_base_bdevs_discovered": 4, 00:15:21.408 "num_base_bdevs_operational": 4, 00:15:21.408 "base_bdevs_list": [ 00:15:21.408 { 00:15:21.408 "name": "BaseBdev1", 00:15:21.408 "uuid": "4077e82d-e684-5dd4-b984-2d30f90bf26b", 00:15:21.408 "is_configured": true, 00:15:21.408 "data_offset": 2048, 00:15:21.408 "data_size": 63488 00:15:21.408 }, 00:15:21.408 { 00:15:21.408 "name": "BaseBdev2", 00:15:21.408 "uuid": "9e722ffc-97f4-5a4c-9f1f-e99d1ccd1935", 00:15:21.408 "is_configured": true, 00:15:21.408 "data_offset": 2048, 00:15:21.408 "data_size": 63488 00:15:21.408 }, 00:15:21.408 { 00:15:21.408 "name": "BaseBdev3", 00:15:21.408 "uuid": "90dfdebb-a054-582b-bdcd-0bdeb808b498", 00:15:21.408 "is_configured": true, 00:15:21.408 "data_offset": 2048, 00:15:21.408 "data_size": 63488 00:15:21.408 }, 00:15:21.408 { 00:15:21.408 "name": "BaseBdev4", 00:15:21.408 "uuid": "1a9cdc6e-f4b5-56e5-9114-9becb6f8f33f", 00:15:21.408 "is_configured": true, 00:15:21.408 "data_offset": 2048, 00:15:21.408 "data_size": 63488 00:15:21.408 } 00:15:21.408 ] 00:15:21.408 }' 00:15:21.408 06:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.408 06:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:21.993 [2024-08-13 06:10:23.664113] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.993 [2024-08-13 06:10:23.664235] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.993 [2024-08-13 06:10:23.666480] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.993 [2024-08-13 06:10:23.666572] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.993 [2024-08-13 06:10:23.666629] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.993 [2024-08-13 06:10:23.666669] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:15:21.993 0 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 88734 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 88734 ']' 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 88734 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88734 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88734' 00:15:21.993 killing process with pid 88734 00:15:21.993 06:10:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 88734 00:15:21.993 [2024-08-13 06:10:23.714683] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.993 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 88734 00:15:21.993 [2024-08-13 06:10:23.749475] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.BI8SzmN8I7 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:15:22.267 00:15:22.267 real 0m6.727s 00:15:22.267 user 0m10.615s 00:15:22.267 sys 0m1.025s 00:15:22.267 ************************************ 00:15:22.267 END TEST raid_write_error_test 00:15:22.267 ************************************ 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:22.267 06:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.267 06:10:24 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:22.267 06:10:24 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:22.267 06:10:24 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:22.267 06:10:24 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:22.267 06:10:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.527 ************************************ 00:15:22.527 START TEST raid_state_function_test 00:15:22.527 ************************************ 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=88909 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 88909' 00:15:22.527 Process raid pid: 88909 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 88909 /var/tmp/spdk-raid.sock 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 88909 ']' 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:22.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:22.527 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.527 [2024-08-13 06:10:24.175062] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:15:22.527 [2024-08-13 06:10:24.175278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.787 [2024-08-13 06:10:24.323121] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.787 [2024-08-13 06:10:24.372356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.787 [2024-08-13 06:10:24.415605] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.787 [2024-08-13 06:10:24.415719] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.356 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:23.356 06:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:15:23.356 06:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:23.616 [2024-08-13 06:10:25.175713] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.616 [2024-08-13 06:10:25.175826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.616 [2024-08-13 06:10:25.175841] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.616 [2024-08-13 06:10:25.175849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.616 [2024-08-13 06:10:25.175859] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.616 [2024-08-13 06:10:25.175866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.616 [2024-08-13 06:10:25.175875] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.616 [2024-08-13 06:10:25.175881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.616 "name": "Existed_Raid", 00:15:23.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.616 "strip_size_kb": 0, 00:15:23.616 "state": "configuring", 00:15:23.616 "raid_level": "raid1", 00:15:23.616 "superblock": false, 00:15:23.616 "num_base_bdevs": 4, 00:15:23.616 "num_base_bdevs_discovered": 0, 00:15:23.616 "num_base_bdevs_operational": 4, 00:15:23.616 "base_bdevs_list": [ 00:15:23.616 { 00:15:23.616 "name": "BaseBdev1", 00:15:23.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.616 "is_configured": false, 00:15:23.616 "data_offset": 0, 00:15:23.616 "data_size": 0 00:15:23.616 }, 00:15:23.616 { 00:15:23.616 "name": "BaseBdev2", 00:15:23.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.616 "is_configured": false, 00:15:23.616 "data_offset": 0, 00:15:23.616 "data_size": 0 00:15:23.616 }, 00:15:23.616 { 00:15:23.616 "name": "BaseBdev3", 00:15:23.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.616 "is_configured": false, 00:15:23.616 "data_offset": 0, 00:15:23.616 "data_size": 0 00:15:23.616 }, 00:15:23.616 { 00:15:23.616 "name": "BaseBdev4", 00:15:23.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.616 "is_configured": false, 00:15:23.616 "data_offset": 0, 00:15:23.616 "data_size": 0 00:15:23.616 } 00:15:23.616 ] 00:15:23.616 }' 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.616 06:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.556 06:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:24.556 [2024-08-13 06:10:26.161931] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.556 [2024-08-13 06:10:26.162061] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:24.556 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:24.815 [2024-08-13 06:10:26.357636] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.815 [2024-08-13 06:10:26.357735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.815 [2024-08-13 06:10:26.357765] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.815 [2024-08-13 06:10:26.357785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.815 [2024-08-13 06:10:26.357803] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.815 [2024-08-13 06:10:26.357821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.815 [2024-08-13 06:10:26.357839] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.815 [2024-08-13 06:10:26.357857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.815 
06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.815 [2024-08-13 06:10:26.530112] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.815 BaseBdev1 00:15:24.815 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:24.815 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:24.815 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:24.815 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:24.815 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:24.815 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:24.815 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.075 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.334 [ 00:15:25.334 { 00:15:25.334 "name": "BaseBdev1", 00:15:25.334 "aliases": [ 00:15:25.334 "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6" 00:15:25.334 ], 00:15:25.334 "product_name": "Malloc disk", 00:15:25.334 "block_size": 512, 00:15:25.335 "num_blocks": 65536, 00:15:25.335 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:25.335 "assigned_rate_limits": { 00:15:25.335 "rw_ios_per_sec": 0, 00:15:25.335 "rw_mbytes_per_sec": 0, 00:15:25.335 "r_mbytes_per_sec": 0, 00:15:25.335 "w_mbytes_per_sec": 0 00:15:25.335 }, 00:15:25.335 "claimed": true, 00:15:25.335 "claim_type": "exclusive_write", 00:15:25.335 "zoned": false, 00:15:25.335 "supported_io_types": { 00:15:25.335 "read": true, 00:15:25.335 "write": true, 00:15:25.335 "unmap": true, 00:15:25.335 "flush": true, 00:15:25.335 "reset": true, 00:15:25.335 "nvme_admin": false, 00:15:25.335 "nvme_io": false, 00:15:25.335 "nvme_io_md": false, 00:15:25.335 "write_zeroes": true, 00:15:25.335 "zcopy": true, 00:15:25.335 "get_zone_info": false, 00:15:25.335 "zone_management": false, 00:15:25.335 "zone_append": false, 00:15:25.335 "compare": false, 00:15:25.335 "compare_and_write": false, 00:15:25.335 "abort": true, 00:15:25.335 "seek_hole": false, 00:15:25.335 "seek_data": false, 00:15:25.335 "copy": true, 00:15:25.335 "nvme_iov_md": false 00:15:25.335 }, 00:15:25.335 "memory_domains": [ 00:15:25.335 { 00:15:25.335 "dma_device_id": "system", 00:15:25.335 "dma_device_type": 1 00:15:25.335 }, 00:15:25.335 { 00:15:25.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.335 "dma_device_type": 2 00:15:25.335 } 00:15:25.335 ], 00:15:25.335 "driver_specific": {} 00:15:25.335 } 00:15:25.335 ] 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.335 
06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.335 06:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.594 06:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.594 "name": "Existed_Raid", 00:15:25.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.595 "strip_size_kb": 0, 00:15:25.595 "state": "configuring", 00:15:25.595 "raid_level": "raid1", 00:15:25.595 "superblock": false, 00:15:25.595 "num_base_bdevs": 4, 00:15:25.595 "num_base_bdevs_discovered": 1, 00:15:25.595 "num_base_bdevs_operational": 4, 00:15:25.595 "base_bdevs_list": [ 00:15:25.595 { 00:15:25.595 "name": "BaseBdev1", 00:15:25.595 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:25.595 "is_configured": true, 00:15:25.595 "data_offset": 0, 00:15:25.595 "data_size": 65536 00:15:25.595 }, 00:15:25.595 { 00:15:25.595 "name": "BaseBdev2", 00:15:25.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.595 "is_configured": false, 00:15:25.595 "data_offset": 0, 00:15:25.595 "data_size": 0 00:15:25.595 }, 00:15:25.595 { 00:15:25.595 "name": "BaseBdev3", 00:15:25.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.595 "is_configured": false, 00:15:25.595 "data_offset": 0, 00:15:25.595 "data_size": 0 00:15:25.595 }, 00:15:25.595 { 00:15:25.595 "name": "BaseBdev4", 00:15:25.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.595 "is_configured": false, 00:15:25.595 "data_offset": 0, 00:15:25.595 "data_size": 0 00:15:25.595 } 00:15:25.595 ] 00:15:25.595 }' 00:15:25.595 06:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.595 06:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.164 06:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.164 [2024-08-13 06:10:27.851950] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.164 [2024-08-13 06:10:27.852054] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:26.164 06:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:26.424 [2024-08-13 06:10:28.019697] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.424 
[2024-08-13 06:10:28.021361] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.424 [2024-08-13 06:10:28.021429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.424 [2024-08-13 06:10:28.021456] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.424 [2024-08-13 06:10:28.021474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.424 [2024-08-13 06:10:28.021496] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:26.424 [2024-08-13 06:10:28.021513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.424 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.684 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.684 "name": "Existed_Raid", 00:15:26.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.684 "strip_size_kb": 0, 00:15:26.684 "state": "configuring", 00:15:26.684 "raid_level": "raid1", 00:15:26.684 "superblock": false, 00:15:26.684 "num_base_bdevs": 4, 00:15:26.684 "num_base_bdevs_discovered": 1, 00:15:26.684 "num_base_bdevs_operational": 4, 00:15:26.684 "base_bdevs_list": [ 00:15:26.684 { 00:15:26.684 "name": "BaseBdev1", 00:15:26.684 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:26.684 "is_configured": true, 00:15:26.684 "data_offset": 0, 00:15:26.684 "data_size": 65536 00:15:26.684 }, 00:15:26.684 { 00:15:26.684 "name": "BaseBdev2", 00:15:26.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.684 "is_configured": false, 00:15:26.684 "data_offset": 0, 00:15:26.684 "data_size": 0 00:15:26.684 }, 00:15:26.684 { 00:15:26.684 "name": "BaseBdev3", 00:15:26.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.684 "is_configured": false, 
00:15:26.684 "data_offset": 0, 00:15:26.684 "data_size": 0 00:15:26.684 }, 00:15:26.684 { 00:15:26.684 "name": "BaseBdev4", 00:15:26.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.684 "is_configured": false, 00:15:26.684 "data_offset": 0, 00:15:26.684 "data_size": 0 00:15:26.684 } 00:15:26.684 ] 00:15:26.684 }' 00:15:26.684 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.684 06:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.253 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.253 [2024-08-13 06:10:28.931648] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.253 BaseBdev2 00:15:27.253 06:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:27.253 06:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:27.253 06:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:27.253 06:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:27.253 06:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:27.253 06:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:27.254 06:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.513 06:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.772 [ 00:15:27.772 { 00:15:27.772 "name": "BaseBdev2", 00:15:27.772 "aliases": [ 00:15:27.772 "c6492d53-d57c-409a-a7d4-29cd053a4ed5" 00:15:27.772 ], 00:15:27.772 "product_name": "Malloc disk", 00:15:27.772 "block_size": 512, 00:15:27.772 "num_blocks": 65536, 00:15:27.772 "uuid": "c6492d53-d57c-409a-a7d4-29cd053a4ed5", 00:15:27.772 "assigned_rate_limits": { 00:15:27.772 "rw_ios_per_sec": 0, 00:15:27.772 "rw_mbytes_per_sec": 0, 00:15:27.772 "r_mbytes_per_sec": 0, 00:15:27.772 "w_mbytes_per_sec": 0 00:15:27.772 }, 00:15:27.772 "claimed": true, 00:15:27.772 "claim_type": "exclusive_write", 00:15:27.772 "zoned": false, 00:15:27.772 "supported_io_types": { 00:15:27.772 "read": true, 00:15:27.772 "write": true, 00:15:27.772 "unmap": true, 00:15:27.772 "flush": true, 00:15:27.772 "reset": true, 00:15:27.772 "nvme_admin": false, 00:15:27.772 "nvme_io": false, 00:15:27.772 "nvme_io_md": false, 00:15:27.772 "write_zeroes": true, 00:15:27.772 "zcopy": true, 00:15:27.772 "get_zone_info": false, 00:15:27.772 "zone_management": false, 00:15:27.772 "zone_append": false, 00:15:27.772 "compare": false, 00:15:27.772 "compare_and_write": false, 00:15:27.772 "abort": true, 00:15:27.772 "seek_hole": false, 00:15:27.772 "seek_data": false, 00:15:27.772 "copy": true, 00:15:27.772 "nvme_iov_md": false 00:15:27.772 }, 00:15:27.772 "memory_domains": [ 00:15:27.772 { 00:15:27.772 "dma_device_id": "system", 00:15:27.773 "dma_device_type": 1 00:15:27.773 }, 00:15:27.773 { 00:15:27.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.773 "dma_device_type": 2 00:15:27.773 } 00:15:27.773 ], 00:15:27.773 
"driver_specific": {} 00:15:27.773 } 00:15:27.773 ] 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.773 "name": "Existed_Raid", 00:15:27.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.773 "strip_size_kb": 0, 00:15:27.773 "state": "configuring", 00:15:27.773 "raid_level": "raid1", 00:15:27.773 "superblock": false, 00:15:27.773 "num_base_bdevs": 4, 00:15:27.773 "num_base_bdevs_discovered": 2, 00:15:27.773 "num_base_bdevs_operational": 4, 00:15:27.773 "base_bdevs_list": [ 00:15:27.773 { 00:15:27.773 "name": "BaseBdev1", 00:15:27.773 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:27.773 "is_configured": true, 00:15:27.773 "data_offset": 0, 00:15:27.773 "data_size": 65536 00:15:27.773 }, 00:15:27.773 { 00:15:27.773 "name": "BaseBdev2", 00:15:27.773 "uuid": "c6492d53-d57c-409a-a7d4-29cd053a4ed5", 00:15:27.773 "is_configured": true, 00:15:27.773 "data_offset": 0, 00:15:27.773 "data_size": 65536 00:15:27.773 }, 00:15:27.773 { 00:15:27.773 "name": "BaseBdev3", 00:15:27.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.773 "is_configured": false, 00:15:27.773 "data_offset": 0, 00:15:27.773 "data_size": 0 00:15:27.773 }, 00:15:27.773 { 00:15:27.773 "name": "BaseBdev4", 00:15:27.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.773 "is_configured": false, 00:15:27.773 "data_offset": 0, 00:15:27.773 "data_size": 0 00:15:27.773 } 00:15:27.773 ] 00:15:27.773 }' 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.773 06:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:28.602 [2024-08-13 06:10:30.216850] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.602 BaseBdev3 00:15:28.602 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:28.602 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:28.602 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:28.602 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:28.602 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:28.602 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:28.602 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:28.862 [ 00:15:28.862 { 00:15:28.862 "name": "BaseBdev3", 00:15:28.862 "aliases": [ 00:15:28.862 "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7" 00:15:28.862 ], 00:15:28.862 "product_name": "Malloc disk", 00:15:28.862 "block_size": 512, 00:15:28.862 "num_blocks": 65536, 00:15:28.862 "uuid": "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7", 00:15:28.862 "assigned_rate_limits": { 00:15:28.862 "rw_ios_per_sec": 0, 00:15:28.862 "rw_mbytes_per_sec": 0, 00:15:28.862 "r_mbytes_per_sec": 0, 00:15:28.862 "w_mbytes_per_sec": 0 00:15:28.862 }, 00:15:28.862 "claimed": true, 00:15:28.862 "claim_type": "exclusive_write", 00:15:28.862 "zoned": false, 00:15:28.862 "supported_io_types": { 00:15:28.862 "read": true, 00:15:28.862 "write": true, 00:15:28.862 "unmap": true, 00:15:28.862 "flush": true, 00:15:28.862 "reset": true, 00:15:28.862 "nvme_admin": false, 00:15:28.862 "nvme_io": false, 00:15:28.862 "nvme_io_md": false, 00:15:28.862 "write_zeroes": true, 00:15:28.862 "zcopy": true, 00:15:28.862 "get_zone_info": false, 00:15:28.862 "zone_management": false, 00:15:28.862 "zone_append": false, 00:15:28.862 "compare": false, 00:15:28.862 "compare_and_write": false, 00:15:28.862 "abort": true, 00:15:28.862 "seek_hole": false, 00:15:28.862 "seek_data": false, 00:15:28.862 "copy": true, 00:15:28.862 "nvme_iov_md": false 00:15:28.862 }, 00:15:28.862 "memory_domains": [ 00:15:28.862 { 00:15:28.862 "dma_device_id": "system", 00:15:28.862 "dma_device_type": 1 00:15:28.862 }, 00:15:28.862 { 00:15:28.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.862 "dma_device_type": 2 00:15:28.862 } 00:15:28.862 ], 00:15:28.862 "driver_specific": {} 00:15:28.862 } 00:15:28.862 ] 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.862 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.122 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.122 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.122 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.122 "name": "Existed_Raid", 00:15:29.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.122 "strip_size_kb": 0, 00:15:29.122 "state": "configuring", 00:15:29.122 "raid_level": "raid1", 00:15:29.122 "superblock": false, 00:15:29.122 "num_base_bdevs": 4, 00:15:29.122 "num_base_bdevs_discovered": 3, 00:15:29.122 "num_base_bdevs_operational": 4, 00:15:29.122 "base_bdevs_list": [ 00:15:29.122 { 00:15:29.122 "name": "BaseBdev1", 00:15:29.122 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:29.122 "is_configured": true, 00:15:29.122 "data_offset": 0, 00:15:29.122 "data_size": 65536 00:15:29.122 }, 00:15:29.122 { 00:15:29.122 "name": "BaseBdev2", 00:15:29.122 "uuid": "c6492d53-d57c-409a-a7d4-29cd053a4ed5", 00:15:29.122 "is_configured": true, 00:15:29.122 "data_offset": 0, 00:15:29.122 "data_size": 65536 00:15:29.122 }, 00:15:29.122 { 00:15:29.122 "name": "BaseBdev3", 00:15:29.122 "uuid": "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7", 00:15:29.122 "is_configured": true, 00:15:29.122 "data_offset": 0, 00:15:29.122 "data_size": 65536 00:15:29.122 }, 00:15:29.122 { 00:15:29.122 "name": "BaseBdev4", 00:15:29.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.122 "is_configured": false, 00:15:29.122 "data_offset": 0, 00:15:29.122 "data_size": 0 00:15:29.122 } 00:15:29.122 ] 00:15:29.122 }' 00:15:29.122 06:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.122 06:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.692 06:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:29.951 [2024-08-13 06:10:31.617730] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.951 [2024-08-13 06:10:31.617869] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:29.951 [2024-08-13 06:10:31.617884] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:29.951 [2024-08-13 06:10:31.618157] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:15:29.951 [2024-08-13 06:10:31.618315] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:29.951 [2024-08-13 06:10:31.618326] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:29.951 [2024-08-13 06:10:31.618512] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.951 BaseBdev4 00:15:29.951 06:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:29.951 06:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:29.951 06:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:29.951 06:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:29.951 06:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:29.951 06:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:29.951 06:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.211 06:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:30.471 [ 00:15:30.471 { 00:15:30.471 "name": "BaseBdev4", 00:15:30.471 "aliases": [ 00:15:30.471 "2c9f03ad-fa46-4b38-b5f9-2aee9ceea6c1" 00:15:30.471 ], 00:15:30.471 "product_name": "Malloc disk", 00:15:30.471 "block_size": 512, 00:15:30.471 "num_blocks": 65536, 00:15:30.471 "uuid": "2c9f03ad-fa46-4b38-b5f9-2aee9ceea6c1", 00:15:30.471 "assigned_rate_limits": { 00:15:30.471 "rw_ios_per_sec": 0, 00:15:30.471 "rw_mbytes_per_sec": 0, 00:15:30.471 "r_mbytes_per_sec": 0, 00:15:30.471 "w_mbytes_per_sec": 0 00:15:30.471 }, 00:15:30.471 "claimed": true, 00:15:30.471 "claim_type": "exclusive_write", 00:15:30.471 "zoned": false, 00:15:30.471 "supported_io_types": { 00:15:30.471 "read": true, 00:15:30.471 "write": true, 00:15:30.471 "unmap": true, 00:15:30.471 "flush": true, 00:15:30.471 "reset": true, 00:15:30.471 "nvme_admin": false, 00:15:30.471 "nvme_io": false, 00:15:30.471 "nvme_io_md": false, 00:15:30.471 "write_zeroes": true, 00:15:30.471 "zcopy": true, 00:15:30.471 "get_zone_info": false, 00:15:30.471 "zone_management": false, 00:15:30.471 "zone_append": false, 00:15:30.471 "compare": false, 00:15:30.471 "compare_and_write": false, 00:15:30.471 "abort": true, 00:15:30.471 "seek_hole": false, 00:15:30.471 "seek_data": false, 00:15:30.471 "copy": true, 00:15:30.471 "nvme_iov_md": false 00:15:30.471 }, 00:15:30.471 "memory_domains": [ 00:15:30.471 { 00:15:30.471 "dma_device_id": "system", 00:15:30.471 "dma_device_type": 1 00:15:30.471 }, 00:15:30.471 { 00:15:30.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.471 "dma_device_type": 2 00:15:30.471 } 00:15:30.471 ], 00:15:30.471 "driver_specific": {} 00:15:30.471 } 00:15:30.471 ] 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.471 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.730 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.730 "name": "Existed_Raid", 00:15:30.730 "uuid": "cee4b92c-d481-478c-b379-cd1a5468e246", 00:15:30.730 "strip_size_kb": 0, 00:15:30.730 "state": "online", 00:15:30.730 "raid_level": "raid1", 00:15:30.730 "superblock": false, 00:15:30.730 "num_base_bdevs": 4, 00:15:30.730 "num_base_bdevs_discovered": 4, 00:15:30.730 "num_base_bdevs_operational": 4, 00:15:30.730 "base_bdevs_list": [ 00:15:30.730 { 00:15:30.730 "name": "BaseBdev1", 00:15:30.730 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:30.730 "is_configured": true, 00:15:30.730 "data_offset": 0, 00:15:30.730 "data_size": 65536 00:15:30.730 }, 00:15:30.730 { 00:15:30.730 "name": "BaseBdev2", 00:15:30.730 "uuid": "c6492d53-d57c-409a-a7d4-29cd053a4ed5", 00:15:30.730 "is_configured": true, 00:15:30.730 "data_offset": 0, 00:15:30.730 "data_size": 65536 00:15:30.730 }, 00:15:30.730 { 00:15:30.730 "name": "BaseBdev3", 00:15:30.730 "uuid": "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7", 00:15:30.730 "is_configured": true, 00:15:30.730 "data_offset": 0, 00:15:30.730 "data_size": 65536 00:15:30.730 }, 00:15:30.730 { 00:15:30.730 "name": "BaseBdev4", 00:15:30.730 "uuid": "2c9f03ad-fa46-4b38-b5f9-2aee9ceea6c1", 00:15:30.730 "is_configured": true, 00:15:30.730 "data_offset": 0, 00:15:30.730 "data_size": 65536 00:15:30.730 } 00:15:30.730 ] 00:15:30.730 }' 00:15:30.730 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.730 06:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:31.298 06:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:31.298 [2024-08-13 06:10:33.015620] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.298 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:31.298 "name": "Existed_Raid", 00:15:31.298 "aliases": [ 00:15:31.298 "cee4b92c-d481-478c-b379-cd1a5468e246" 00:15:31.298 ], 00:15:31.298 "product_name": "Raid Volume", 00:15:31.298 "block_size": 512, 00:15:31.298 "num_blocks": 65536, 00:15:31.298 "uuid": "cee4b92c-d481-478c-b379-cd1a5468e246", 00:15:31.298 "assigned_rate_limits": { 00:15:31.298 "rw_ios_per_sec": 0, 00:15:31.298 "rw_mbytes_per_sec": 0, 00:15:31.298 "r_mbytes_per_sec": 0, 00:15:31.298 "w_mbytes_per_sec": 0 00:15:31.298 }, 00:15:31.298 "claimed": false, 00:15:31.298 "zoned": false, 00:15:31.298 "supported_io_types": { 00:15:31.298 "read": true, 00:15:31.298 "write": true, 00:15:31.298 "unmap": false, 00:15:31.298 "flush": false, 00:15:31.298 "reset": true, 00:15:31.298 "nvme_admin": false, 00:15:31.298 "nvme_io": false, 00:15:31.298 "nvme_io_md": false, 00:15:31.298 "write_zeroes": true, 00:15:31.298 "zcopy": false, 00:15:31.298 "get_zone_info": false, 00:15:31.298 "zone_management": false, 00:15:31.298 "zone_append": false, 00:15:31.298 "compare": false, 00:15:31.298 "compare_and_write": false, 00:15:31.298 "abort": false, 00:15:31.298 "seek_hole": false, 00:15:31.298 "seek_data": false, 00:15:31.298 "copy": false, 00:15:31.298 "nvme_iov_md": false 00:15:31.298 }, 00:15:31.298 "memory_domains": [ 00:15:31.298 { 00:15:31.298 "dma_device_id": "system", 00:15:31.298 "dma_device_type": 1 00:15:31.298 }, 00:15:31.298 { 00:15:31.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.298 "dma_device_type": 2 00:15:31.298 }, 00:15:31.298 { 00:15:31.298 "dma_device_id": "system", 00:15:31.298 "dma_device_type": 1 00:15:31.298 }, 00:15:31.298 { 00:15:31.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.298 "dma_device_type": 2 00:15:31.298 }, 00:15:31.298 { 00:15:31.298 "dma_device_id": "system", 00:15:31.298 "dma_device_type": 1 00:15:31.298 }, 00:15:31.298 { 00:15:31.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.298 "dma_device_type": 2 00:15:31.298 }, 00:15:31.298 { 00:15:31.298 "dma_device_id": "system", 00:15:31.298 "dma_device_type": 1 00:15:31.298 }, 00:15:31.298 { 00:15:31.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.298 "dma_device_type": 2 00:15:31.298 } 00:15:31.298 ], 00:15:31.298 "driver_specific": { 00:15:31.298 "raid": { 00:15:31.298 "uuid": "cee4b92c-d481-478c-b379-cd1a5468e246", 00:15:31.298 "strip_size_kb": 0, 00:15:31.298 "state": "online", 00:15:31.299 "raid_level": "raid1", 00:15:31.299 "superblock": false, 00:15:31.299 "num_base_bdevs": 4, 00:15:31.299 "num_base_bdevs_discovered": 4, 00:15:31.299 "num_base_bdevs_operational": 4, 00:15:31.299 "base_bdevs_list": [ 00:15:31.299 { 00:15:31.299 "name": "BaseBdev1", 00:15:31.299 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:31.299 "is_configured": true, 00:15:31.299 "data_offset": 0, 00:15:31.299 "data_size": 65536 00:15:31.299 }, 00:15:31.299 { 00:15:31.299 "name": "BaseBdev2", 
00:15:31.299 "uuid": "c6492d53-d57c-409a-a7d4-29cd053a4ed5", 00:15:31.299 "is_configured": true, 00:15:31.299 "data_offset": 0, 00:15:31.299 "data_size": 65536 00:15:31.299 }, 00:15:31.299 { 00:15:31.299 "name": "BaseBdev3", 00:15:31.299 "uuid": "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7", 00:15:31.299 "is_configured": true, 00:15:31.299 "data_offset": 0, 00:15:31.299 "data_size": 65536 00:15:31.299 }, 00:15:31.299 { 00:15:31.299 "name": "BaseBdev4", 00:15:31.299 "uuid": "2c9f03ad-fa46-4b38-b5f9-2aee9ceea6c1", 00:15:31.299 "is_configured": true, 00:15:31.299 "data_offset": 0, 00:15:31.299 "data_size": 65536 00:15:31.299 } 00:15:31.299 ] 00:15:31.299 } 00:15:31.299 } 00:15:31.299 }' 00:15:31.299 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.299 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:31.299 BaseBdev2 00:15:31.299 BaseBdev3 00:15:31.299 BaseBdev4' 00:15:31.299 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.299 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:31.299 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:31.558 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:31.558 "name": "BaseBdev1", 00:15:31.558 "aliases": [ 00:15:31.558 "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6" 00:15:31.558 ], 00:15:31.558 "product_name": "Malloc disk", 00:15:31.558 "block_size": 512, 00:15:31.558 "num_blocks": 65536, 00:15:31.558 "uuid": "d85f08b8-77c0-4fe9-8fd3-db874b3eddd6", 00:15:31.558 "assigned_rate_limits": { 00:15:31.558 "rw_ios_per_sec": 0, 00:15:31.558 "rw_mbytes_per_sec": 0, 00:15:31.558 "r_mbytes_per_sec": 0, 00:15:31.558 "w_mbytes_per_sec": 0 00:15:31.558 }, 00:15:31.558 "claimed": true, 00:15:31.558 "claim_type": "exclusive_write", 00:15:31.558 "zoned": false, 00:15:31.558 "supported_io_types": { 00:15:31.558 "read": true, 00:15:31.558 "write": true, 00:15:31.558 "unmap": true, 00:15:31.558 "flush": true, 00:15:31.558 "reset": true, 00:15:31.558 "nvme_admin": false, 00:15:31.558 "nvme_io": false, 00:15:31.558 "nvme_io_md": false, 00:15:31.558 "write_zeroes": true, 00:15:31.558 "zcopy": true, 00:15:31.558 "get_zone_info": false, 00:15:31.558 "zone_management": false, 00:15:31.558 "zone_append": false, 00:15:31.558 "compare": false, 00:15:31.558 "compare_and_write": false, 00:15:31.558 "abort": true, 00:15:31.558 "seek_hole": false, 00:15:31.558 "seek_data": false, 00:15:31.558 "copy": true, 00:15:31.558 "nvme_iov_md": false 00:15:31.558 }, 00:15:31.558 "memory_domains": [ 00:15:31.558 { 00:15:31.558 "dma_device_id": "system", 00:15:31.558 "dma_device_type": 1 00:15:31.558 }, 00:15:31.558 { 00:15:31.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.558 "dma_device_type": 2 00:15:31.558 } 00:15:31.558 ], 00:15:31.558 "driver_specific": {} 00:15:31.558 }' 00:15:31.558 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.558 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.817 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.075 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.075 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.075 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:32.075 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.075 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.075 "name": "BaseBdev2", 00:15:32.075 "aliases": [ 00:15:32.075 "c6492d53-d57c-409a-a7d4-29cd053a4ed5" 00:15:32.075 ], 00:15:32.075 "product_name": "Malloc disk", 00:15:32.075 "block_size": 512, 00:15:32.075 "num_blocks": 65536, 00:15:32.075 "uuid": "c6492d53-d57c-409a-a7d4-29cd053a4ed5", 00:15:32.075 "assigned_rate_limits": { 00:15:32.075 "rw_ios_per_sec": 0, 00:15:32.075 "rw_mbytes_per_sec": 0, 00:15:32.075 "r_mbytes_per_sec": 0, 00:15:32.075 "w_mbytes_per_sec": 0 00:15:32.075 }, 00:15:32.075 "claimed": true, 00:15:32.075 "claim_type": "exclusive_write", 00:15:32.075 "zoned": false, 00:15:32.075 "supported_io_types": { 00:15:32.075 "read": true, 00:15:32.075 "write": true, 00:15:32.075 "unmap": true, 00:15:32.075 "flush": true, 00:15:32.075 "reset": true, 00:15:32.075 "nvme_admin": false, 00:15:32.075 "nvme_io": false, 00:15:32.075 "nvme_io_md": false, 00:15:32.075 "write_zeroes": true, 00:15:32.075 "zcopy": true, 00:15:32.075 "get_zone_info": false, 00:15:32.075 "zone_management": false, 00:15:32.075 "zone_append": false, 00:15:32.075 "compare": false, 00:15:32.075 "compare_and_write": false, 00:15:32.075 "abort": true, 00:15:32.075 "seek_hole": false, 00:15:32.075 "seek_data": false, 00:15:32.075 "copy": true, 00:15:32.075 "nvme_iov_md": false 00:15:32.075 }, 00:15:32.075 "memory_domains": [ 00:15:32.075 { 00:15:32.075 "dma_device_id": "system", 00:15:32.075 "dma_device_type": 1 00:15:32.075 }, 00:15:32.075 { 00:15:32.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.075 "dma_device_type": 2 00:15:32.075 } 00:15:32.075 ], 00:15:32.075 "driver_specific": {} 00:15:32.075 }' 00:15:32.075 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.075 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.406 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.406 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.406 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.406 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:15:32.406 06:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:32.406 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.665 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.666 "name": "BaseBdev3", 00:15:32.666 "aliases": [ 00:15:32.666 "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7" 00:15:32.666 ], 00:15:32.666 "product_name": "Malloc disk", 00:15:32.666 "block_size": 512, 00:15:32.666 "num_blocks": 65536, 00:15:32.666 "uuid": "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7", 00:15:32.666 "assigned_rate_limits": { 00:15:32.666 "rw_ios_per_sec": 0, 00:15:32.666 "rw_mbytes_per_sec": 0, 00:15:32.666 "r_mbytes_per_sec": 0, 00:15:32.666 "w_mbytes_per_sec": 0 00:15:32.666 }, 00:15:32.666 "claimed": true, 00:15:32.666 "claim_type": "exclusive_write", 00:15:32.666 "zoned": false, 00:15:32.666 "supported_io_types": { 00:15:32.666 "read": true, 00:15:32.666 "write": true, 00:15:32.666 "unmap": true, 00:15:32.666 "flush": true, 00:15:32.666 "reset": true, 00:15:32.666 "nvme_admin": false, 00:15:32.666 "nvme_io": false, 00:15:32.666 "nvme_io_md": false, 00:15:32.666 "write_zeroes": true, 00:15:32.666 "zcopy": true, 00:15:32.666 "get_zone_info": false, 00:15:32.666 "zone_management": false, 00:15:32.666 "zone_append": false, 00:15:32.666 "compare": false, 00:15:32.666 "compare_and_write": false, 00:15:32.666 "abort": true, 00:15:32.666 "seek_hole": false, 00:15:32.666 "seek_data": false, 00:15:32.666 "copy": true, 00:15:32.666 "nvme_iov_md": false 00:15:32.666 }, 00:15:32.666 "memory_domains": [ 00:15:32.666 { 00:15:32.666 "dma_device_id": "system", 00:15:32.666 "dma_device_type": 1 00:15:32.666 }, 00:15:32.666 { 00:15:32.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.666 "dma_device_type": 2 00:15:32.666 } 00:15:32.666 ], 00:15:32.666 "driver_specific": {} 00:15:32.666 }' 00:15:32.666 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.666 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.666 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.666 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:32.925 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.185 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.185 "name": "BaseBdev4", 00:15:33.185 "aliases": [ 00:15:33.185 "2c9f03ad-fa46-4b38-b5f9-2aee9ceea6c1" 00:15:33.185 ], 00:15:33.185 "product_name": "Malloc disk", 00:15:33.185 "block_size": 512, 00:15:33.185 "num_blocks": 65536, 00:15:33.185 "uuid": "2c9f03ad-fa46-4b38-b5f9-2aee9ceea6c1", 00:15:33.185 "assigned_rate_limits": { 00:15:33.185 "rw_ios_per_sec": 0, 00:15:33.185 "rw_mbytes_per_sec": 0, 00:15:33.185 "r_mbytes_per_sec": 0, 00:15:33.185 "w_mbytes_per_sec": 0 00:15:33.185 }, 00:15:33.185 "claimed": true, 00:15:33.185 "claim_type": "exclusive_write", 00:15:33.185 "zoned": false, 00:15:33.185 "supported_io_types": { 00:15:33.185 "read": true, 00:15:33.185 "write": true, 00:15:33.185 "unmap": true, 00:15:33.185 "flush": true, 00:15:33.185 "reset": true, 00:15:33.185 "nvme_admin": false, 00:15:33.185 "nvme_io": false, 00:15:33.185 "nvme_io_md": false, 00:15:33.185 "write_zeroes": true, 00:15:33.185 "zcopy": true, 00:15:33.185 "get_zone_info": false, 00:15:33.185 "zone_management": false, 00:15:33.185 "zone_append": false, 00:15:33.185 "compare": false, 00:15:33.185 "compare_and_write": false, 00:15:33.185 "abort": true, 00:15:33.185 "seek_hole": false, 00:15:33.185 "seek_data": false, 00:15:33.185 "copy": true, 00:15:33.185 "nvme_iov_md": false 00:15:33.185 }, 00:15:33.185 "memory_domains": [ 00:15:33.185 { 00:15:33.185 "dma_device_id": "system", 00:15:33.185 "dma_device_type": 1 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.185 "dma_device_type": 2 00:15:33.185 } 00:15:33.185 ], 00:15:33.185 "driver_specific": {} 00:15:33.185 }' 00:15:33.185 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.185 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.444 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.444 06:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.444 06:10:35 
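The per-base-bdev property checks above repeat the same pattern for BaseBdev1 through BaseBdev4; a condensed sketch, assuming the same RPC socket and the Malloc-disk geometry shown in the dumps (512-byte blocks, no metadata, interleaving, or DIF):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # Each configured base bdev must match the raid volume's block size and
        # carry no separate metadata, interleaving, or DIF protection.
        [[ $(jq .block_size    <<< "$info") == 512  ]]
        [[ $(jq .md_size       <<< "$info") == null ]]
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type      <<< "$info") == null ]]
    done
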
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.444 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:33.703 [2024-08-13 06:10:35.415777] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.703 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.962 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.962 "name": "Existed_Raid", 00:15:33.962 "uuid": "cee4b92c-d481-478c-b379-cd1a5468e246", 00:15:33.962 "strip_size_kb": 0, 00:15:33.962 "state": "online", 00:15:33.962 "raid_level": "raid1", 00:15:33.962 "superblock": false, 00:15:33.962 "num_base_bdevs": 4, 00:15:33.962 "num_base_bdevs_discovered": 3, 00:15:33.962 "num_base_bdevs_operational": 3, 00:15:33.962 "base_bdevs_list": [ 00:15:33.962 { 00:15:33.962 "name": null, 00:15:33.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.962 "is_configured": false, 00:15:33.962 "data_offset": 0, 00:15:33.962 "data_size": 65536 00:15:33.962 }, 00:15:33.962 { 00:15:33.962 "name": "BaseBdev2", 00:15:33.962 "uuid": "c6492d53-d57c-409a-a7d4-29cd053a4ed5", 00:15:33.962 "is_configured": true, 00:15:33.962 "data_offset": 0, 00:15:33.962 "data_size": 65536 00:15:33.962 }, 00:15:33.962 { 00:15:33.962 "name": "BaseBdev3", 
00:15:33.962 "uuid": "9dd1561a-773c-44b6-b3a0-5749d4eeb1f7", 00:15:33.962 "is_configured": true, 00:15:33.962 "data_offset": 0, 00:15:33.962 "data_size": 65536 00:15:33.962 }, 00:15:33.962 { 00:15:33.962 "name": "BaseBdev4", 00:15:33.962 "uuid": "2c9f03ad-fa46-4b38-b5f9-2aee9ceea6c1", 00:15:33.962 "is_configured": true, 00:15:33.962 "data_offset": 0, 00:15:33.962 "data_size": 65536 00:15:33.962 } 00:15:33.962 ] 00:15:33.962 }' 00:15:33.962 06:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.962 06:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.531 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:34.531 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:34.531 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.531 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:34.790 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:34.790 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.790 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:35.049 [2024-08-13 06:10:36.620726] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.049 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:35.049 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:35.049 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.049 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:35.308 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:35.308 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.308 06:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:35.308 [2024-08-13 06:10:37.023088] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.308 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:35.308 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:35.308 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:35.308 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.568 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:35.568 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.568 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:35.826 [2024-08-13 06:10:37.421233] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:35.826 [2024-08-13 06:10:37.421410] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.826 [2024-08-13 06:10:37.432968] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.826 [2024-08-13 06:10:37.433135] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.826 [2024-08-13 06:10:37.433156] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:35.826 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:35.826 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:35.826 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.826 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.086 BaseBdev2 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:36.086 06:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.345 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.604 [ 00:15:36.604 { 00:15:36.604 "name": "BaseBdev2", 00:15:36.604 "aliases": [ 00:15:36.604 "0e9bfdaf-1535-47b7-b53c-46257c5d0461" 00:15:36.604 ], 00:15:36.604 "product_name": "Malloc disk", 00:15:36.604 "block_size": 512, 00:15:36.604 "num_blocks": 65536, 00:15:36.604 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:36.604 "assigned_rate_limits": { 00:15:36.604 "rw_ios_per_sec": 0, 00:15:36.604 "rw_mbytes_per_sec": 0, 00:15:36.604 "r_mbytes_per_sec": 0, 
00:15:36.604 "w_mbytes_per_sec": 0 00:15:36.604 }, 00:15:36.604 "claimed": false, 00:15:36.604 "zoned": false, 00:15:36.604 "supported_io_types": { 00:15:36.604 "read": true, 00:15:36.604 "write": true, 00:15:36.604 "unmap": true, 00:15:36.604 "flush": true, 00:15:36.604 "reset": true, 00:15:36.604 "nvme_admin": false, 00:15:36.604 "nvme_io": false, 00:15:36.604 "nvme_io_md": false, 00:15:36.604 "write_zeroes": true, 00:15:36.604 "zcopy": true, 00:15:36.604 "get_zone_info": false, 00:15:36.604 "zone_management": false, 00:15:36.604 "zone_append": false, 00:15:36.604 "compare": false, 00:15:36.604 "compare_and_write": false, 00:15:36.604 "abort": true, 00:15:36.604 "seek_hole": false, 00:15:36.604 "seek_data": false, 00:15:36.604 "copy": true, 00:15:36.604 "nvme_iov_md": false 00:15:36.604 }, 00:15:36.604 "memory_domains": [ 00:15:36.604 { 00:15:36.604 "dma_device_id": "system", 00:15:36.604 "dma_device_type": 1 00:15:36.604 }, 00:15:36.604 { 00:15:36.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.604 "dma_device_type": 2 00:15:36.604 } 00:15:36.604 ], 00:15:36.604 "driver_specific": {} 00:15:36.604 } 00:15:36.604 ] 00:15:36.604 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:36.604 06:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:36.604 06:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:36.604 06:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.864 BaseBdev3 00:15:36.864 06:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:36.864 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:36.864 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:36.864 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:36.864 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:36.864 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:36.864 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.124 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.124 [ 00:15:37.124 { 00:15:37.124 "name": "BaseBdev3", 00:15:37.124 "aliases": [ 00:15:37.124 "d1889801-967b-45f7-b258-07c8d3736be4" 00:15:37.124 ], 00:15:37.124 "product_name": "Malloc disk", 00:15:37.124 "block_size": 512, 00:15:37.124 "num_blocks": 65536, 00:15:37.124 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:37.124 "assigned_rate_limits": { 00:15:37.124 "rw_ios_per_sec": 0, 00:15:37.124 "rw_mbytes_per_sec": 0, 00:15:37.124 "r_mbytes_per_sec": 0, 00:15:37.124 "w_mbytes_per_sec": 0 00:15:37.124 }, 00:15:37.124 "claimed": false, 00:15:37.124 "zoned": false, 00:15:37.124 "supported_io_types": { 00:15:37.124 "read": true, 00:15:37.124 "write": true, 00:15:37.124 "unmap": true, 00:15:37.124 "flush": true, 00:15:37.124 "reset": true, 00:15:37.124 "nvme_admin": false, 00:15:37.124 "nvme_io": 
false, 00:15:37.124 "nvme_io_md": false, 00:15:37.124 "write_zeroes": true, 00:15:37.124 "zcopy": true, 00:15:37.124 "get_zone_info": false, 00:15:37.124 "zone_management": false, 00:15:37.124 "zone_append": false, 00:15:37.124 "compare": false, 00:15:37.124 "compare_and_write": false, 00:15:37.124 "abort": true, 00:15:37.124 "seek_hole": false, 00:15:37.124 "seek_data": false, 00:15:37.124 "copy": true, 00:15:37.124 "nvme_iov_md": false 00:15:37.124 }, 00:15:37.124 "memory_domains": [ 00:15:37.124 { 00:15:37.124 "dma_device_id": "system", 00:15:37.124 "dma_device_type": 1 00:15:37.124 }, 00:15:37.124 { 00:15:37.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.124 "dma_device_type": 2 00:15:37.124 } 00:15:37.124 ], 00:15:37.124 "driver_specific": {} 00:15:37.124 } 00:15:37.124 ] 00:15:37.124 06:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:37.124 06:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:37.124 06:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:37.124 06:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:37.384 BaseBdev4 00:15:37.384 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:37.384 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:37.384 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:37.384 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:37.384 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:37.384 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:37.384 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.643 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:37.902 [ 00:15:37.902 { 00:15:37.902 "name": "BaseBdev4", 00:15:37.902 "aliases": [ 00:15:37.902 "afc43dbf-af75-42d8-abd6-673629aa747f" 00:15:37.902 ], 00:15:37.902 "product_name": "Malloc disk", 00:15:37.902 "block_size": 512, 00:15:37.902 "num_blocks": 65536, 00:15:37.902 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:37.902 "assigned_rate_limits": { 00:15:37.902 "rw_ios_per_sec": 0, 00:15:37.902 "rw_mbytes_per_sec": 0, 00:15:37.902 "r_mbytes_per_sec": 0, 00:15:37.902 "w_mbytes_per_sec": 0 00:15:37.902 }, 00:15:37.902 "claimed": false, 00:15:37.902 "zoned": false, 00:15:37.902 "supported_io_types": { 00:15:37.902 "read": true, 00:15:37.902 "write": true, 00:15:37.902 "unmap": true, 00:15:37.902 "flush": true, 00:15:37.902 "reset": true, 00:15:37.902 "nvme_admin": false, 00:15:37.902 "nvme_io": false, 00:15:37.902 "nvme_io_md": false, 00:15:37.902 "write_zeroes": true, 00:15:37.902 "zcopy": true, 00:15:37.902 "get_zone_info": false, 00:15:37.902 "zone_management": false, 00:15:37.902 "zone_append": false, 00:15:37.902 "compare": false, 00:15:37.902 "compare_and_write": false, 00:15:37.902 "abort": true, 00:15:37.902 "seek_hole": false, 
00:15:37.902 "seek_data": false, 00:15:37.902 "copy": true, 00:15:37.902 "nvme_iov_md": false 00:15:37.902 }, 00:15:37.902 "memory_domains": [ 00:15:37.902 { 00:15:37.902 "dma_device_id": "system", 00:15:37.902 "dma_device_type": 1 00:15:37.902 }, 00:15:37.902 { 00:15:37.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.902 "dma_device_type": 2 00:15:37.902 } 00:15:37.902 ], 00:15:37.902 "driver_specific": {} 00:15:37.902 } 00:15:37.902 ] 00:15:37.903 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:37.903 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:37.903 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:37.903 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:37.903 [2024-08-13 06:10:39.682071] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.903 [2024-08-13 06:10:39.682167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.903 [2024-08-13 06:10:39.682222] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.903 [2024-08-13 06:10:39.683974] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.903 [2024-08-13 06:10:39.684081] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.162 "name": "Existed_Raid", 00:15:38.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.162 "strip_size_kb": 0, 00:15:38.162 "state": "configuring", 00:15:38.162 "raid_level": "raid1", 00:15:38.162 "superblock": false, 00:15:38.162 "num_base_bdevs": 4, 00:15:38.162 "num_base_bdevs_discovered": 3, 00:15:38.162 
"num_base_bdevs_operational": 4, 00:15:38.162 "base_bdevs_list": [ 00:15:38.162 { 00:15:38.162 "name": "BaseBdev1", 00:15:38.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.162 "is_configured": false, 00:15:38.162 "data_offset": 0, 00:15:38.162 "data_size": 0 00:15:38.162 }, 00:15:38.162 { 00:15:38.162 "name": "BaseBdev2", 00:15:38.162 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:38.162 "is_configured": true, 00:15:38.162 "data_offset": 0, 00:15:38.162 "data_size": 65536 00:15:38.162 }, 00:15:38.162 { 00:15:38.162 "name": "BaseBdev3", 00:15:38.162 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:38.162 "is_configured": true, 00:15:38.162 "data_offset": 0, 00:15:38.162 "data_size": 65536 00:15:38.162 }, 00:15:38.162 { 00:15:38.162 "name": "BaseBdev4", 00:15:38.162 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:38.162 "is_configured": true, 00:15:38.162 "data_offset": 0, 00:15:38.162 "data_size": 65536 00:15:38.162 } 00:15:38.162 ] 00:15:38.162 }' 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.162 06:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.729 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:38.988 [2024-08-13 06:10:40.676349] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.988 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.247 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.247 "name": "Existed_Raid", 00:15:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.247 "strip_size_kb": 0, 00:15:39.247 "state": "configuring", 00:15:39.247 "raid_level": "raid1", 00:15:39.247 "superblock": false, 00:15:39.247 "num_base_bdevs": 4, 00:15:39.247 "num_base_bdevs_discovered": 2, 00:15:39.247 "num_base_bdevs_operational": 4, 00:15:39.247 "base_bdevs_list": [ 00:15:39.247 { 00:15:39.247 "name": "BaseBdev1", 00:15:39.247 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:39.247 "is_configured": false, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 0 00:15:39.247 }, 00:15:39.247 { 00:15:39.247 "name": null, 00:15:39.247 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:39.247 "is_configured": false, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 65536 00:15:39.247 }, 00:15:39.247 { 00:15:39.247 "name": "BaseBdev3", 00:15:39.247 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:39.247 "is_configured": true, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 65536 00:15:39.247 }, 00:15:39.247 { 00:15:39.247 "name": "BaseBdev4", 00:15:39.247 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:39.247 "is_configured": true, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 65536 00:15:39.247 } 00:15:39.247 ] 00:15:39.247 }' 00:15:39.247 06:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.247 06:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.815 06:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.815 06:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:40.075 [2024-08-13 06:10:41.809332] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.075 BaseBdev1 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:40.075 06:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.335 06:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.595 [ 00:15:40.595 { 00:15:40.595 "name": "BaseBdev1", 00:15:40.595 "aliases": [ 00:15:40.595 "b834b6bd-7799-4993-995b-a66437f11133" 00:15:40.595 ], 00:15:40.595 "product_name": "Malloc disk", 00:15:40.595 "block_size": 512, 00:15:40.595 "num_blocks": 65536, 00:15:40.595 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:40.595 "assigned_rate_limits": { 00:15:40.595 "rw_ios_per_sec": 0, 00:15:40.595 "rw_mbytes_per_sec": 0, 00:15:40.595 "r_mbytes_per_sec": 0, 00:15:40.595 "w_mbytes_per_sec": 0 00:15:40.595 }, 00:15:40.595 "claimed": true, 00:15:40.595 "claim_type": "exclusive_write", 00:15:40.595 "zoned": false, 00:15:40.595 "supported_io_types": { 
00:15:40.595 "read": true, 00:15:40.595 "write": true, 00:15:40.595 "unmap": true, 00:15:40.595 "flush": true, 00:15:40.595 "reset": true, 00:15:40.595 "nvme_admin": false, 00:15:40.595 "nvme_io": false, 00:15:40.595 "nvme_io_md": false, 00:15:40.595 "write_zeroes": true, 00:15:40.595 "zcopy": true, 00:15:40.595 "get_zone_info": false, 00:15:40.595 "zone_management": false, 00:15:40.595 "zone_append": false, 00:15:40.595 "compare": false, 00:15:40.595 "compare_and_write": false, 00:15:40.595 "abort": true, 00:15:40.595 "seek_hole": false, 00:15:40.595 "seek_data": false, 00:15:40.595 "copy": true, 00:15:40.595 "nvme_iov_md": false 00:15:40.595 }, 00:15:40.595 "memory_domains": [ 00:15:40.595 { 00:15:40.595 "dma_device_id": "system", 00:15:40.595 "dma_device_type": 1 00:15:40.595 }, 00:15:40.595 { 00:15:40.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.595 "dma_device_type": 2 00:15:40.595 } 00:15:40.595 ], 00:15:40.595 "driver_specific": {} 00:15:40.595 } 00:15:40.595 ] 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.595 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.855 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.855 "name": "Existed_Raid", 00:15:40.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.855 "strip_size_kb": 0, 00:15:40.855 "state": "configuring", 00:15:40.855 "raid_level": "raid1", 00:15:40.855 "superblock": false, 00:15:40.855 "num_base_bdevs": 4, 00:15:40.855 "num_base_bdevs_discovered": 3, 00:15:40.855 "num_base_bdevs_operational": 4, 00:15:40.855 "base_bdevs_list": [ 00:15:40.855 { 00:15:40.855 "name": "BaseBdev1", 00:15:40.855 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:40.855 "is_configured": true, 00:15:40.855 "data_offset": 0, 00:15:40.855 "data_size": 65536 00:15:40.855 }, 00:15:40.855 { 00:15:40.855 "name": null, 00:15:40.855 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:40.855 "is_configured": false, 00:15:40.855 "data_offset": 0, 00:15:40.855 "data_size": 65536 00:15:40.855 }, 00:15:40.855 { 00:15:40.855 "name": 
"BaseBdev3", 00:15:40.855 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:40.855 "is_configured": true, 00:15:40.855 "data_offset": 0, 00:15:40.855 "data_size": 65536 00:15:40.855 }, 00:15:40.855 { 00:15:40.855 "name": "BaseBdev4", 00:15:40.855 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:40.855 "is_configured": true, 00:15:40.855 "data_offset": 0, 00:15:40.855 "data_size": 65536 00:15:40.855 } 00:15:40.855 ] 00:15:40.855 }' 00:15:40.855 06:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.855 06:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.426 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.426 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.426 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:41.426 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:41.686 [2024-08-13 06:10:43.362856] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.686 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.946 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.946 "name": "Existed_Raid", 00:15:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.946 "strip_size_kb": 0, 00:15:41.946 "state": "configuring", 00:15:41.946 "raid_level": "raid1", 00:15:41.946 "superblock": false, 00:15:41.946 "num_base_bdevs": 4, 00:15:41.947 "num_base_bdevs_discovered": 2, 00:15:41.947 "num_base_bdevs_operational": 4, 00:15:41.947 "base_bdevs_list": [ 00:15:41.947 { 00:15:41.947 "name": "BaseBdev1", 00:15:41.947 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:41.947 "is_configured": true, 00:15:41.947 "data_offset": 0, 00:15:41.947 "data_size": 65536 
00:15:41.947 }, 00:15:41.947 { 00:15:41.947 "name": null, 00:15:41.947 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:41.947 "is_configured": false, 00:15:41.947 "data_offset": 0, 00:15:41.947 "data_size": 65536 00:15:41.947 }, 00:15:41.947 { 00:15:41.947 "name": null, 00:15:41.947 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:41.947 "is_configured": false, 00:15:41.947 "data_offset": 0, 00:15:41.947 "data_size": 65536 00:15:41.947 }, 00:15:41.947 { 00:15:41.947 "name": "BaseBdev4", 00:15:41.947 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:41.947 "is_configured": true, 00:15:41.947 "data_offset": 0, 00:15:41.947 "data_size": 65536 00:15:41.947 } 00:15:41.947 ] 00:15:41.947 }' 00:15:41.947 06:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.947 06:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.516 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.516 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:42.775 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:42.775 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:42.775 [2024-08-13 06:10:44.536982] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.776 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.036 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.036 "name": "Existed_Raid", 00:15:43.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.036 "strip_size_kb": 0, 00:15:43.036 "state": "configuring", 00:15:43.036 "raid_level": "raid1", 00:15:43.036 "superblock": false, 00:15:43.036 "num_base_bdevs": 4, 00:15:43.036 
"num_base_bdevs_discovered": 3, 00:15:43.036 "num_base_bdevs_operational": 4, 00:15:43.036 "base_bdevs_list": [ 00:15:43.036 { 00:15:43.036 "name": "BaseBdev1", 00:15:43.036 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:43.036 "is_configured": true, 00:15:43.036 "data_offset": 0, 00:15:43.036 "data_size": 65536 00:15:43.036 }, 00:15:43.036 { 00:15:43.036 "name": null, 00:15:43.036 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:43.036 "is_configured": false, 00:15:43.036 "data_offset": 0, 00:15:43.036 "data_size": 65536 00:15:43.036 }, 00:15:43.036 { 00:15:43.036 "name": "BaseBdev3", 00:15:43.036 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:43.036 "is_configured": true, 00:15:43.036 "data_offset": 0, 00:15:43.036 "data_size": 65536 00:15:43.036 }, 00:15:43.036 { 00:15:43.036 "name": "BaseBdev4", 00:15:43.036 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:43.036 "is_configured": true, 00:15:43.036 "data_offset": 0, 00:15:43.036 "data_size": 65536 00:15:43.036 } 00:15:43.036 ] 00:15:43.036 }' 00:15:43.036 06:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.036 06:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.609 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.609 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.879 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:43.879 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:44.161 [2024-08-13 06:10:45.683173] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.161 "name": 
"Existed_Raid", 00:15:44.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.161 "strip_size_kb": 0, 00:15:44.161 "state": "configuring", 00:15:44.161 "raid_level": "raid1", 00:15:44.161 "superblock": false, 00:15:44.161 "num_base_bdevs": 4, 00:15:44.161 "num_base_bdevs_discovered": 2, 00:15:44.161 "num_base_bdevs_operational": 4, 00:15:44.161 "base_bdevs_list": [ 00:15:44.161 { 00:15:44.161 "name": null, 00:15:44.161 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:44.161 "is_configured": false, 00:15:44.161 "data_offset": 0, 00:15:44.161 "data_size": 65536 00:15:44.161 }, 00:15:44.161 { 00:15:44.161 "name": null, 00:15:44.161 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:44.161 "is_configured": false, 00:15:44.161 "data_offset": 0, 00:15:44.161 "data_size": 65536 00:15:44.161 }, 00:15:44.161 { 00:15:44.161 "name": "BaseBdev3", 00:15:44.161 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:44.161 "is_configured": true, 00:15:44.161 "data_offset": 0, 00:15:44.161 "data_size": 65536 00:15:44.161 }, 00:15:44.161 { 00:15:44.161 "name": "BaseBdev4", 00:15:44.161 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:44.161 "is_configured": true, 00:15:44.161 "data_offset": 0, 00:15:44.161 "data_size": 65536 00:15:44.161 } 00:15:44.161 ] 00:15:44.161 }' 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.161 06:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.745 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.745 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.004 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:45.004 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:45.264 [2024-08-13 06:10:46.848845] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.264 06:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.524 06:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.524 "name": "Existed_Raid", 00:15:45.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.524 "strip_size_kb": 0, 00:15:45.524 "state": "configuring", 00:15:45.524 "raid_level": "raid1", 00:15:45.524 "superblock": false, 00:15:45.524 "num_base_bdevs": 4, 00:15:45.524 "num_base_bdevs_discovered": 3, 00:15:45.524 "num_base_bdevs_operational": 4, 00:15:45.524 "base_bdevs_list": [ 00:15:45.524 { 00:15:45.524 "name": null, 00:15:45.524 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:45.524 "is_configured": false, 00:15:45.524 "data_offset": 0, 00:15:45.524 "data_size": 65536 00:15:45.524 }, 00:15:45.524 { 00:15:45.524 "name": "BaseBdev2", 00:15:45.524 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:45.524 "is_configured": true, 00:15:45.524 "data_offset": 0, 00:15:45.524 "data_size": 65536 00:15:45.524 }, 00:15:45.524 { 00:15:45.524 "name": "BaseBdev3", 00:15:45.524 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:45.524 "is_configured": true, 00:15:45.524 "data_offset": 0, 00:15:45.524 "data_size": 65536 00:15:45.524 }, 00:15:45.524 { 00:15:45.524 "name": "BaseBdev4", 00:15:45.524 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:45.524 "is_configured": true, 00:15:45.524 "data_offset": 0, 00:15:45.524 "data_size": 65536 00:15:45.524 } 00:15:45.524 ] 00:15:45.524 }' 00:15:45.524 06:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.524 06:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.094 06:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:46.094 06:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.094 06:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:46.094 06:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.094 06:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:46.354 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b834b6bd-7799-4993-995b-a66437f11133 00:15:46.614 [2024-08-13 06:10:48.266979] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:46.614 [2024-08-13 06:10:48.267170] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:46.614 [2024-08-13 06:10:48.267205] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:46.614 [2024-08-13 06:10:48.267546] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:15:46.614 [2024-08-13 06:10:48.267733] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:46.614 [2024-08-13 06:10:48.267778] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001c80 00:15:46.614 [2024-08-13 06:10:48.268011] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.614 NewBaseBdev 00:15:46.614 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:46.614 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:15:46.614 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:46.614 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:46.614 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:46.614 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:46.614 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.874 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:46.874 [ 00:15:46.874 { 00:15:46.874 "name": "NewBaseBdev", 00:15:46.874 "aliases": [ 00:15:46.874 "b834b6bd-7799-4993-995b-a66437f11133" 00:15:46.874 ], 00:15:46.874 "product_name": "Malloc disk", 00:15:46.874 "block_size": 512, 00:15:46.874 "num_blocks": 65536, 00:15:46.874 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:46.874 "assigned_rate_limits": { 00:15:46.874 "rw_ios_per_sec": 0, 00:15:46.874 "rw_mbytes_per_sec": 0, 00:15:46.874 "r_mbytes_per_sec": 0, 00:15:46.874 "w_mbytes_per_sec": 0 00:15:46.874 }, 00:15:46.874 "claimed": true, 00:15:46.874 "claim_type": "exclusive_write", 00:15:46.874 "zoned": false, 00:15:46.874 "supported_io_types": { 00:15:46.874 "read": true, 00:15:46.874 "write": true, 00:15:46.874 "unmap": true, 00:15:46.874 "flush": true, 00:15:46.874 "reset": true, 00:15:46.874 "nvme_admin": false, 00:15:46.874 "nvme_io": false, 00:15:46.874 "nvme_io_md": false, 00:15:46.874 "write_zeroes": true, 00:15:46.874 "zcopy": true, 00:15:46.874 "get_zone_info": false, 00:15:46.874 "zone_management": false, 00:15:46.874 "zone_append": false, 00:15:46.874 "compare": false, 00:15:46.874 "compare_and_write": false, 00:15:46.874 "abort": true, 00:15:46.874 "seek_hole": false, 00:15:46.874 "seek_data": false, 00:15:46.874 "copy": true, 00:15:46.874 "nvme_iov_md": false 00:15:46.874 }, 00:15:46.874 "memory_domains": [ 00:15:46.874 { 00:15:46.874 "dma_device_id": "system", 00:15:46.874 "dma_device_type": 1 00:15:46.874 }, 00:15:46.874 { 00:15:46.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.874 "dma_device_type": 2 00:15:46.874 } 00:15:46.874 ], 00:15:46.874 "driver_specific": {} 00:15:46.874 } 00:15:46.874 ] 00:15:46.874 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:46.874 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:46.874 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.874 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:46.874 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.875 06:10:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.875 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:46.875 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.875 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.875 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.875 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.135 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.135 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.135 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.135 "name": "Existed_Raid", 00:15:47.135 "uuid": "9c112017-a2e5-4f0b-b4ec-161a3379bb21", 00:15:47.135 "strip_size_kb": 0, 00:15:47.135 "state": "online", 00:15:47.135 "raid_level": "raid1", 00:15:47.135 "superblock": false, 00:15:47.135 "num_base_bdevs": 4, 00:15:47.135 "num_base_bdevs_discovered": 4, 00:15:47.135 "num_base_bdevs_operational": 4, 00:15:47.135 "base_bdevs_list": [ 00:15:47.135 { 00:15:47.135 "name": "NewBaseBdev", 00:15:47.135 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:47.135 "is_configured": true, 00:15:47.135 "data_offset": 0, 00:15:47.135 "data_size": 65536 00:15:47.135 }, 00:15:47.135 { 00:15:47.135 "name": "BaseBdev2", 00:15:47.135 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:47.135 "is_configured": true, 00:15:47.135 "data_offset": 0, 00:15:47.135 "data_size": 65536 00:15:47.135 }, 00:15:47.135 { 00:15:47.135 "name": "BaseBdev3", 00:15:47.135 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:47.135 "is_configured": true, 00:15:47.135 "data_offset": 0, 00:15:47.135 "data_size": 65536 00:15:47.135 }, 00:15:47.135 { 00:15:47.135 "name": "BaseBdev4", 00:15:47.135 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:47.135 "is_configured": true, 00:15:47.135 "data_offset": 0, 00:15:47.135 "data_size": 65536 00:15:47.135 } 00:15:47.135 ] 00:15:47.135 }' 00:15:47.135 06:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.135 06:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:47.704 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:47.964 [2024-08-13 06:10:49.616978] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.964 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:47.964 "name": "Existed_Raid", 00:15:47.964 "aliases": [ 00:15:47.964 "9c112017-a2e5-4f0b-b4ec-161a3379bb21" 00:15:47.964 ], 00:15:47.964 "product_name": "Raid Volume", 00:15:47.964 "block_size": 512, 00:15:47.964 "num_blocks": 65536, 00:15:47.964 "uuid": "9c112017-a2e5-4f0b-b4ec-161a3379bb21", 00:15:47.964 "assigned_rate_limits": { 00:15:47.964 "rw_ios_per_sec": 0, 00:15:47.964 "rw_mbytes_per_sec": 0, 00:15:47.964 "r_mbytes_per_sec": 0, 00:15:47.964 "w_mbytes_per_sec": 0 00:15:47.964 }, 00:15:47.964 "claimed": false, 00:15:47.964 "zoned": false, 00:15:47.964 "supported_io_types": { 00:15:47.964 "read": true, 00:15:47.964 "write": true, 00:15:47.964 "unmap": false, 00:15:47.964 "flush": false, 00:15:47.964 "reset": true, 00:15:47.964 "nvme_admin": false, 00:15:47.964 "nvme_io": false, 00:15:47.964 "nvme_io_md": false, 00:15:47.964 "write_zeroes": true, 00:15:47.964 "zcopy": false, 00:15:47.964 "get_zone_info": false, 00:15:47.964 "zone_management": false, 00:15:47.964 "zone_append": false, 00:15:47.964 "compare": false, 00:15:47.964 "compare_and_write": false, 00:15:47.964 "abort": false, 00:15:47.964 "seek_hole": false, 00:15:47.964 "seek_data": false, 00:15:47.964 "copy": false, 00:15:47.964 "nvme_iov_md": false 00:15:47.964 }, 00:15:47.964 "memory_domains": [ 00:15:47.964 { 00:15:47.964 "dma_device_id": "system", 00:15:47.964 "dma_device_type": 1 00:15:47.964 }, 00:15:47.964 { 00:15:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.964 "dma_device_type": 2 00:15:47.964 }, 00:15:47.964 { 00:15:47.964 "dma_device_id": "system", 00:15:47.964 "dma_device_type": 1 00:15:47.964 }, 00:15:47.964 { 00:15:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.964 "dma_device_type": 2 00:15:47.964 }, 00:15:47.964 { 00:15:47.964 "dma_device_id": "system", 00:15:47.964 "dma_device_type": 1 00:15:47.964 }, 00:15:47.964 { 00:15:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.964 "dma_device_type": 2 00:15:47.964 }, 00:15:47.964 { 00:15:47.964 "dma_device_id": "system", 00:15:47.964 "dma_device_type": 1 00:15:47.964 }, 00:15:47.964 { 00:15:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.964 "dma_device_type": 2 00:15:47.964 } 00:15:47.964 ], 00:15:47.964 "driver_specific": { 00:15:47.964 "raid": { 00:15:47.964 "uuid": "9c112017-a2e5-4f0b-b4ec-161a3379bb21", 00:15:47.964 "strip_size_kb": 0, 00:15:47.964 "state": "online", 00:15:47.964 "raid_level": "raid1", 00:15:47.964 "superblock": false, 00:15:47.964 "num_base_bdevs": 4, 00:15:47.964 "num_base_bdevs_discovered": 4, 00:15:47.964 "num_base_bdevs_operational": 4, 00:15:47.964 "base_bdevs_list": [ 00:15:47.964 { 00:15:47.965 "name": "NewBaseBdev", 00:15:47.965 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:47.965 "is_configured": true, 00:15:47.965 "data_offset": 0, 00:15:47.965 "data_size": 65536 00:15:47.965 }, 00:15:47.965 { 00:15:47.965 "name": "BaseBdev2", 00:15:47.965 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:47.965 "is_configured": true, 00:15:47.965 "data_offset": 0, 00:15:47.965 "data_size": 65536 00:15:47.965 }, 00:15:47.965 { 00:15:47.965 "name": "BaseBdev3", 00:15:47.965 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:47.965 "is_configured": true, 00:15:47.965 "data_offset": 0, 00:15:47.965 "data_size": 65536 00:15:47.965 }, 00:15:47.965 { 00:15:47.965 "name": "BaseBdev4", 00:15:47.965 "uuid": 
"afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:47.965 "is_configured": true, 00:15:47.965 "data_offset": 0, 00:15:47.965 "data_size": 65536 00:15:47.965 } 00:15:47.965 ] 00:15:47.965 } 00:15:47.965 } 00:15:47.965 }' 00:15:47.965 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.965 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:47.965 BaseBdev2 00:15:47.965 BaseBdev3 00:15:47.965 BaseBdev4' 00:15:47.965 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.965 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:47.965 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.225 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.225 "name": "NewBaseBdev", 00:15:48.225 "aliases": [ 00:15:48.225 "b834b6bd-7799-4993-995b-a66437f11133" 00:15:48.225 ], 00:15:48.225 "product_name": "Malloc disk", 00:15:48.225 "block_size": 512, 00:15:48.225 "num_blocks": 65536, 00:15:48.225 "uuid": "b834b6bd-7799-4993-995b-a66437f11133", 00:15:48.225 "assigned_rate_limits": { 00:15:48.225 "rw_ios_per_sec": 0, 00:15:48.225 "rw_mbytes_per_sec": 0, 00:15:48.225 "r_mbytes_per_sec": 0, 00:15:48.225 "w_mbytes_per_sec": 0 00:15:48.225 }, 00:15:48.225 "claimed": true, 00:15:48.225 "claim_type": "exclusive_write", 00:15:48.225 "zoned": false, 00:15:48.225 "supported_io_types": { 00:15:48.225 "read": true, 00:15:48.225 "write": true, 00:15:48.225 "unmap": true, 00:15:48.225 "flush": true, 00:15:48.225 "reset": true, 00:15:48.225 "nvme_admin": false, 00:15:48.225 "nvme_io": false, 00:15:48.225 "nvme_io_md": false, 00:15:48.225 "write_zeroes": true, 00:15:48.225 "zcopy": true, 00:15:48.225 "get_zone_info": false, 00:15:48.225 "zone_management": false, 00:15:48.225 "zone_append": false, 00:15:48.225 "compare": false, 00:15:48.225 "compare_and_write": false, 00:15:48.225 "abort": true, 00:15:48.225 "seek_hole": false, 00:15:48.225 "seek_data": false, 00:15:48.225 "copy": true, 00:15:48.225 "nvme_iov_md": false 00:15:48.225 }, 00:15:48.225 "memory_domains": [ 00:15:48.225 { 00:15:48.225 "dma_device_id": "system", 00:15:48.225 "dma_device_type": 1 00:15:48.225 }, 00:15:48.225 { 00:15:48.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.225 "dma_device_type": 2 00:15:48.225 } 00:15:48.225 ], 00:15:48.225 "driver_specific": {} 00:15:48.225 }' 00:15:48.225 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.225 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.225 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:48.225 06:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.485 06:10:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.485 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:48.745 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.745 "name": "BaseBdev2", 00:15:48.745 "aliases": [ 00:15:48.745 "0e9bfdaf-1535-47b7-b53c-46257c5d0461" 00:15:48.745 ], 00:15:48.745 "product_name": "Malloc disk", 00:15:48.745 "block_size": 512, 00:15:48.745 "num_blocks": 65536, 00:15:48.745 "uuid": "0e9bfdaf-1535-47b7-b53c-46257c5d0461", 00:15:48.745 "assigned_rate_limits": { 00:15:48.745 "rw_ios_per_sec": 0, 00:15:48.745 "rw_mbytes_per_sec": 0, 00:15:48.745 "r_mbytes_per_sec": 0, 00:15:48.745 "w_mbytes_per_sec": 0 00:15:48.745 }, 00:15:48.745 "claimed": true, 00:15:48.745 "claim_type": "exclusive_write", 00:15:48.745 "zoned": false, 00:15:48.745 "supported_io_types": { 00:15:48.745 "read": true, 00:15:48.745 "write": true, 00:15:48.745 "unmap": true, 00:15:48.745 "flush": true, 00:15:48.745 "reset": true, 00:15:48.745 "nvme_admin": false, 00:15:48.745 "nvme_io": false, 00:15:48.745 "nvme_io_md": false, 00:15:48.745 "write_zeroes": true, 00:15:48.745 "zcopy": true, 00:15:48.745 "get_zone_info": false, 00:15:48.745 "zone_management": false, 00:15:48.745 "zone_append": false, 00:15:48.745 "compare": false, 00:15:48.745 "compare_and_write": false, 00:15:48.745 "abort": true, 00:15:48.745 "seek_hole": false, 00:15:48.745 "seek_data": false, 00:15:48.745 "copy": true, 00:15:48.745 "nvme_iov_md": false 00:15:48.745 }, 00:15:48.745 "memory_domains": [ 00:15:48.745 { 00:15:48.745 "dma_device_id": "system", 00:15:48.745 "dma_device_type": 1 00:15:48.745 }, 00:15:48.745 { 00:15:48.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.745 "dma_device_type": 2 00:15:48.745 } 00:15:48.745 ], 00:15:48.745 "driver_specific": {} 00:15:48.745 }' 00:15:48.745 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.745 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:49.005 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.005 
06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.265 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:49.265 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.265 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:49.265 06:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.265 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.265 "name": "BaseBdev3", 00:15:49.265 "aliases": [ 00:15:49.265 "d1889801-967b-45f7-b258-07c8d3736be4" 00:15:49.265 ], 00:15:49.265 "product_name": "Malloc disk", 00:15:49.265 "block_size": 512, 00:15:49.265 "num_blocks": 65536, 00:15:49.265 "uuid": "d1889801-967b-45f7-b258-07c8d3736be4", 00:15:49.265 "assigned_rate_limits": { 00:15:49.265 "rw_ios_per_sec": 0, 00:15:49.265 "rw_mbytes_per_sec": 0, 00:15:49.265 "r_mbytes_per_sec": 0, 00:15:49.265 "w_mbytes_per_sec": 0 00:15:49.265 }, 00:15:49.265 "claimed": true, 00:15:49.265 "claim_type": "exclusive_write", 00:15:49.265 "zoned": false, 00:15:49.265 "supported_io_types": { 00:15:49.265 "read": true, 00:15:49.265 "write": true, 00:15:49.265 "unmap": true, 00:15:49.265 "flush": true, 00:15:49.265 "reset": true, 00:15:49.265 "nvme_admin": false, 00:15:49.265 "nvme_io": false, 00:15:49.265 "nvme_io_md": false, 00:15:49.265 "write_zeroes": true, 00:15:49.265 "zcopy": true, 00:15:49.265 "get_zone_info": false, 00:15:49.265 "zone_management": false, 00:15:49.265 "zone_append": false, 00:15:49.265 "compare": false, 00:15:49.265 "compare_and_write": false, 00:15:49.265 "abort": true, 00:15:49.265 "seek_hole": false, 00:15:49.265 "seek_data": false, 00:15:49.265 "copy": true, 00:15:49.265 "nvme_iov_md": false 00:15:49.265 }, 00:15:49.265 "memory_domains": [ 00:15:49.265 { 00:15:49.265 "dma_device_id": "system", 00:15:49.265 "dma_device_type": 1 00:15:49.265 }, 00:15:49.265 { 00:15:49.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.265 "dma_device_type": 2 00:15:49.265 } 00:15:49.265 ], 00:15:49.265 "driver_specific": {} 00:15:49.265 }' 00:15:49.265 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.265 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:49.525 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.785 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.785 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:15:49.785 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.785 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.785 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:50.046 "name": "BaseBdev4", 00:15:50.046 "aliases": [ 00:15:50.046 "afc43dbf-af75-42d8-abd6-673629aa747f" 00:15:50.046 ], 00:15:50.046 "product_name": "Malloc disk", 00:15:50.046 "block_size": 512, 00:15:50.046 "num_blocks": 65536, 00:15:50.046 "uuid": "afc43dbf-af75-42d8-abd6-673629aa747f", 00:15:50.046 "assigned_rate_limits": { 00:15:50.046 "rw_ios_per_sec": 0, 00:15:50.046 "rw_mbytes_per_sec": 0, 00:15:50.046 "r_mbytes_per_sec": 0, 00:15:50.046 "w_mbytes_per_sec": 0 00:15:50.046 }, 00:15:50.046 "claimed": true, 00:15:50.046 "claim_type": "exclusive_write", 00:15:50.046 "zoned": false, 00:15:50.046 "supported_io_types": { 00:15:50.046 "read": true, 00:15:50.046 "write": true, 00:15:50.046 "unmap": true, 00:15:50.046 "flush": true, 00:15:50.046 "reset": true, 00:15:50.046 "nvme_admin": false, 00:15:50.046 "nvme_io": false, 00:15:50.046 "nvme_io_md": false, 00:15:50.046 "write_zeroes": true, 00:15:50.046 "zcopy": true, 00:15:50.046 "get_zone_info": false, 00:15:50.046 "zone_management": false, 00:15:50.046 "zone_append": false, 00:15:50.046 "compare": false, 00:15:50.046 "compare_and_write": false, 00:15:50.046 "abort": true, 00:15:50.046 "seek_hole": false, 00:15:50.046 "seek_data": false, 00:15:50.046 "copy": true, 00:15:50.046 "nvme_iov_md": false 00:15:50.046 }, 00:15:50.046 "memory_domains": [ 00:15:50.046 { 00:15:50.046 "dma_device_id": "system", 00:15:50.046 "dma_device_type": 1 00:15:50.046 }, 00:15:50.046 { 00:15:50.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.046 "dma_device_type": 2 00:15:50.046 } 00:15:50.046 ], 00:15:50.046 "driver_specific": {} 00:15:50.046 }' 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.046 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.306 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.306 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.306 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.306 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.306 06:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:50.566 
[2024-08-13 06:10:52.124472] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.566 [2024-08-13 06:10:52.124512] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.566 [2024-08-13 06:10:52.124620] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.566 [2024-08-13 06:10:52.124913] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.566 [2024-08-13 06:10:52.124923] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 88909 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 88909 ']' 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 88909 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88909 00:15:50.566 killing process with pid 88909 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88909' 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 88909 00:15:50.566 [2024-08-13 06:10:52.185491] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.566 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 88909 00:15:50.566 [2024-08-13 06:10:52.260889] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:51.137 00:15:51.137 real 0m28.561s 00:15:51.137 user 0m52.648s 00:15:51.137 sys 0m4.683s 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.137 ************************************ 00:15:51.137 END TEST raid_state_function_test 00:15:51.137 ************************************ 00:15:51.137 06:10:52 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:51.137 06:10:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:51.137 06:10:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.137 06:10:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.137 ************************************ 00:15:51.137 START TEST raid_state_function_test_sb 00:15:51.137 ************************************ 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:51.137 06:10:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:51.137 Process raid pid: 89924 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=89924 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 89924' 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 89924 
/var/tmp/spdk-raid.sock 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 89924 ']' 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:51.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:51.137 06:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.137 [2024-08-13 06:10:52.817536] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:15:51.137 [2024-08-13 06:10:52.817685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.397 [2024-08-13 06:10:52.966020] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.397 [2024-08-13 06:10:53.011748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.397 [2024-08-13 06:10:53.055093] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.397 [2024-08-13 06:10:53.055123] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.966 06:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:51.966 06:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:51.966 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:52.225 [2024-08-13 06:10:53.794993] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.225 [2024-08-13 06:10:53.795062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.225 [2024-08-13 06:10:53.795075] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.225 [2024-08-13 06:10:53.795083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.225 [2024-08-13 06:10:53.795109] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.225 [2024-08-13 06:10:53.795115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.225 [2024-08-13 06:10:53.795124] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.225 [2024-08-13 06:10:53.795131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.225 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.225 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:15:52.225 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:52.225 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:52.225 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:52.226 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:52.226 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.226 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.226 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.226 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.226 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.226 06:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.226 06:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.226 "name": "Existed_Raid", 00:15:52.226 "uuid": "78535c7b-93bb-4c4b-b264-8bb2eae527cc", 00:15:52.226 "strip_size_kb": 0, 00:15:52.226 "state": "configuring", 00:15:52.226 "raid_level": "raid1", 00:15:52.226 "superblock": true, 00:15:52.226 "num_base_bdevs": 4, 00:15:52.226 "num_base_bdevs_discovered": 0, 00:15:52.226 "num_base_bdevs_operational": 4, 00:15:52.226 "base_bdevs_list": [ 00:15:52.226 { 00:15:52.226 "name": "BaseBdev1", 00:15:52.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.226 "is_configured": false, 00:15:52.226 "data_offset": 0, 00:15:52.226 "data_size": 0 00:15:52.226 }, 00:15:52.226 { 00:15:52.226 "name": "BaseBdev2", 00:15:52.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.226 "is_configured": false, 00:15:52.226 "data_offset": 0, 00:15:52.226 "data_size": 0 00:15:52.226 }, 00:15:52.226 { 00:15:52.226 "name": "BaseBdev3", 00:15:52.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.226 "is_configured": false, 00:15:52.226 "data_offset": 0, 00:15:52.226 "data_size": 0 00:15:52.226 }, 00:15:52.226 { 00:15:52.226 "name": "BaseBdev4", 00:15:52.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.226 "is_configured": false, 00:15:52.226 "data_offset": 0, 00:15:52.226 "data_size": 0 00:15:52.226 } 00:15:52.226 ] 00:15:52.226 }' 00:15:52.226 06:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.226 06:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.796 06:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:53.055 [2024-08-13 06:10:54.709268] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.055 [2024-08-13 06:10:54.709361] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:53.055 06:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:53.315 [2024-08-13 06:10:54.900984] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.315 [2024-08-13 06:10:54.901111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.315 [2024-08-13 06:10:54.901160] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.315 [2024-08-13 06:10:54.901180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.315 [2024-08-13 06:10:54.901199] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:53.315 [2024-08-13 06:10:54.901217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:53.315 [2024-08-13 06:10:54.901235] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:53.315 [2024-08-13 06:10:54.901253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:53.315 06:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:53.315 [2024-08-13 06:10:55.097432] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.315 BaseBdev1 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.574 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.832 [ 00:15:53.832 { 00:15:53.832 "name": "BaseBdev1", 00:15:53.832 "aliases": [ 00:15:53.832 "4ca0e405-b4da-4a73-aea1-2d4081a07648" 00:15:53.832 ], 00:15:53.832 "product_name": "Malloc disk", 00:15:53.832 "block_size": 512, 00:15:53.832 "num_blocks": 65536, 00:15:53.832 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:15:53.832 "assigned_rate_limits": { 00:15:53.832 "rw_ios_per_sec": 0, 00:15:53.832 "rw_mbytes_per_sec": 0, 00:15:53.832 "r_mbytes_per_sec": 0, 00:15:53.832 "w_mbytes_per_sec": 0 00:15:53.832 }, 00:15:53.832 "claimed": true, 00:15:53.832 "claim_type": "exclusive_write", 00:15:53.832 "zoned": false, 00:15:53.832 "supported_io_types": { 00:15:53.832 "read": true, 00:15:53.832 "write": true, 00:15:53.832 "unmap": true, 00:15:53.832 "flush": true, 00:15:53.832 "reset": true, 00:15:53.832 "nvme_admin": false, 00:15:53.833 "nvme_io": false, 00:15:53.833 "nvme_io_md": false, 00:15:53.833 "write_zeroes": true, 00:15:53.833 "zcopy": true, 00:15:53.833 "get_zone_info": false, 00:15:53.833 
"zone_management": false, 00:15:53.833 "zone_append": false, 00:15:53.833 "compare": false, 00:15:53.833 "compare_and_write": false, 00:15:53.833 "abort": true, 00:15:53.833 "seek_hole": false, 00:15:53.833 "seek_data": false, 00:15:53.833 "copy": true, 00:15:53.833 "nvme_iov_md": false 00:15:53.833 }, 00:15:53.833 "memory_domains": [ 00:15:53.833 { 00:15:53.833 "dma_device_id": "system", 00:15:53.833 "dma_device_type": 1 00:15:53.833 }, 00:15:53.833 { 00:15:53.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.833 "dma_device_type": 2 00:15:53.833 } 00:15:53.833 ], 00:15:53.833 "driver_specific": {} 00:15:53.833 } 00:15:53.833 ] 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.833 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.093 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.093 "name": "Existed_Raid", 00:15:54.093 "uuid": "b5f57da6-b206-4145-9d10-5d375d9ac939", 00:15:54.093 "strip_size_kb": 0, 00:15:54.093 "state": "configuring", 00:15:54.093 "raid_level": "raid1", 00:15:54.093 "superblock": true, 00:15:54.093 "num_base_bdevs": 4, 00:15:54.093 "num_base_bdevs_discovered": 1, 00:15:54.093 "num_base_bdevs_operational": 4, 00:15:54.093 "base_bdevs_list": [ 00:15:54.093 { 00:15:54.093 "name": "BaseBdev1", 00:15:54.093 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:15:54.093 "is_configured": true, 00:15:54.093 "data_offset": 2048, 00:15:54.093 "data_size": 63488 00:15:54.093 }, 00:15:54.093 { 00:15:54.093 "name": "BaseBdev2", 00:15:54.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.093 "is_configured": false, 00:15:54.093 "data_offset": 0, 00:15:54.093 "data_size": 0 00:15:54.093 }, 00:15:54.093 { 00:15:54.093 "name": "BaseBdev3", 00:15:54.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.093 "is_configured": false, 00:15:54.093 "data_offset": 0, 00:15:54.093 "data_size": 0 00:15:54.093 }, 00:15:54.093 { 00:15:54.093 "name": "BaseBdev4", 00:15:54.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.093 
"is_configured": false, 00:15:54.093 "data_offset": 0, 00:15:54.093 "data_size": 0 00:15:54.093 } 00:15:54.093 ] 00:15:54.093 }' 00:15:54.093 06:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.093 06:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.662 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.922 [2024-08-13 06:10:56.463144] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.922 [2024-08-13 06:10:56.463200] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:54.922 [2024-08-13 06:10:56.650888] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.922 [2024-08-13 06:10:56.652593] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.922 [2024-08-13 06:10:56.652639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.922 [2024-08-13 06:10:56.652650] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.922 [2024-08-13 06:10:56.652673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.922 [2024-08-13 06:10:56.652684] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.922 [2024-08-13 06:10:56.652691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:54.922 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.181 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.181 "name": "Existed_Raid", 00:15:55.181 "uuid": "e9a419d0-1989-47b0-be58-2c1937ddaa56", 00:15:55.181 "strip_size_kb": 0, 00:15:55.181 "state": "configuring", 00:15:55.181 "raid_level": "raid1", 00:15:55.181 "superblock": true, 00:15:55.181 "num_base_bdevs": 4, 00:15:55.181 "num_base_bdevs_discovered": 1, 00:15:55.181 "num_base_bdevs_operational": 4, 00:15:55.181 "base_bdevs_list": [ 00:15:55.181 { 00:15:55.181 "name": "BaseBdev1", 00:15:55.182 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:15:55.182 "is_configured": true, 00:15:55.182 "data_offset": 2048, 00:15:55.182 "data_size": 63488 00:15:55.182 }, 00:15:55.182 { 00:15:55.182 "name": "BaseBdev2", 00:15:55.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.182 "is_configured": false, 00:15:55.182 "data_offset": 0, 00:15:55.182 "data_size": 0 00:15:55.182 }, 00:15:55.182 { 00:15:55.182 "name": "BaseBdev3", 00:15:55.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.182 "is_configured": false, 00:15:55.182 "data_offset": 0, 00:15:55.182 "data_size": 0 00:15:55.182 }, 00:15:55.182 { 00:15:55.182 "name": "BaseBdev4", 00:15:55.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.182 "is_configured": false, 00:15:55.182 "data_offset": 0, 00:15:55.182 "data_size": 0 00:15:55.182 } 00:15:55.182 ] 00:15:55.182 }' 00:15:55.182 06:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.182 06:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 06:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.009 [2024-08-13 06:10:57.578833] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.009 BaseBdev2 00:15:56.009 06:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:56.009 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:56.009 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:56.009 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:56.009 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:56.009 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:56.009 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.268 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.268 [ 00:15:56.268 { 00:15:56.268 "name": "BaseBdev2", 00:15:56.268 "aliases": [ 00:15:56.268 "a1995cc8-d81c-410f-aaaa-540e1e7de978" 00:15:56.268 ], 00:15:56.268 "product_name": "Malloc disk", 00:15:56.268 "block_size": 512, 00:15:56.268 "num_blocks": 65536, 00:15:56.268 "uuid": "a1995cc8-d81c-410f-aaaa-540e1e7de978", 00:15:56.268 
"assigned_rate_limits": { 00:15:56.268 "rw_ios_per_sec": 0, 00:15:56.268 "rw_mbytes_per_sec": 0, 00:15:56.268 "r_mbytes_per_sec": 0, 00:15:56.268 "w_mbytes_per_sec": 0 00:15:56.268 }, 00:15:56.268 "claimed": true, 00:15:56.268 "claim_type": "exclusive_write", 00:15:56.268 "zoned": false, 00:15:56.268 "supported_io_types": { 00:15:56.268 "read": true, 00:15:56.268 "write": true, 00:15:56.268 "unmap": true, 00:15:56.268 "flush": true, 00:15:56.268 "reset": true, 00:15:56.268 "nvme_admin": false, 00:15:56.268 "nvme_io": false, 00:15:56.268 "nvme_io_md": false, 00:15:56.268 "write_zeroes": true, 00:15:56.268 "zcopy": true, 00:15:56.268 "get_zone_info": false, 00:15:56.268 "zone_management": false, 00:15:56.268 "zone_append": false, 00:15:56.268 "compare": false, 00:15:56.268 "compare_and_write": false, 00:15:56.268 "abort": true, 00:15:56.268 "seek_hole": false, 00:15:56.268 "seek_data": false, 00:15:56.268 "copy": true, 00:15:56.268 "nvme_iov_md": false 00:15:56.268 }, 00:15:56.268 "memory_domains": [ 00:15:56.268 { 00:15:56.268 "dma_device_id": "system", 00:15:56.268 "dma_device_type": 1 00:15:56.268 }, 00:15:56.268 { 00:15:56.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.268 "dma_device_type": 2 00:15:56.268 } 00:15:56.268 ], 00:15:56.268 "driver_specific": {} 00:15:56.268 } 00:15:56.268 ] 00:15:56.268 06:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.268 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.527 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.527 "name": "Existed_Raid", 00:15:56.527 "uuid": "e9a419d0-1989-47b0-be58-2c1937ddaa56", 00:15:56.527 "strip_size_kb": 0, 00:15:56.527 "state": "configuring", 00:15:56.527 "raid_level": "raid1", 00:15:56.527 "superblock": true, 00:15:56.527 "num_base_bdevs": 4, 00:15:56.527 
"num_base_bdevs_discovered": 2, 00:15:56.527 "num_base_bdevs_operational": 4, 00:15:56.527 "base_bdevs_list": [ 00:15:56.527 { 00:15:56.527 "name": "BaseBdev1", 00:15:56.527 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:15:56.527 "is_configured": true, 00:15:56.527 "data_offset": 2048, 00:15:56.527 "data_size": 63488 00:15:56.527 }, 00:15:56.527 { 00:15:56.527 "name": "BaseBdev2", 00:15:56.527 "uuid": "a1995cc8-d81c-410f-aaaa-540e1e7de978", 00:15:56.527 "is_configured": true, 00:15:56.527 "data_offset": 2048, 00:15:56.527 "data_size": 63488 00:15:56.527 }, 00:15:56.527 { 00:15:56.527 "name": "BaseBdev3", 00:15:56.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.527 "is_configured": false, 00:15:56.527 "data_offset": 0, 00:15:56.527 "data_size": 0 00:15:56.527 }, 00:15:56.527 { 00:15:56.527 "name": "BaseBdev4", 00:15:56.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.527 "is_configured": false, 00:15:56.527 "data_offset": 0, 00:15:56.527 "data_size": 0 00:15:56.527 } 00:15:56.527 ] 00:15:56.527 }' 00:15:56.527 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.527 06:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.094 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:57.094 [2024-08-13 06:10:58.879856] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.094 BaseBdev3 00:15:57.352 06:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:57.352 06:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:57.352 06:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:57.352 06:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:57.352 06:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:57.352 06:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:57.352 06:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.352 06:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:57.611 [ 00:15:57.611 { 00:15:57.611 "name": "BaseBdev3", 00:15:57.611 "aliases": [ 00:15:57.611 "b9641a55-be8b-48fc-8033-067de0f8994a" 00:15:57.611 ], 00:15:57.611 "product_name": "Malloc disk", 00:15:57.611 "block_size": 512, 00:15:57.611 "num_blocks": 65536, 00:15:57.611 "uuid": "b9641a55-be8b-48fc-8033-067de0f8994a", 00:15:57.611 "assigned_rate_limits": { 00:15:57.611 "rw_ios_per_sec": 0, 00:15:57.611 "rw_mbytes_per_sec": 0, 00:15:57.611 "r_mbytes_per_sec": 0, 00:15:57.611 "w_mbytes_per_sec": 0 00:15:57.611 }, 00:15:57.611 "claimed": true, 00:15:57.611 "claim_type": "exclusive_write", 00:15:57.611 "zoned": false, 00:15:57.611 "supported_io_types": { 00:15:57.611 "read": true, 00:15:57.611 "write": true, 00:15:57.611 "unmap": true, 00:15:57.611 "flush": true, 00:15:57.611 "reset": true, 00:15:57.611 "nvme_admin": false, 00:15:57.611 "nvme_io": false, 
00:15:57.611 "nvme_io_md": false, 00:15:57.611 "write_zeroes": true, 00:15:57.611 "zcopy": true, 00:15:57.611 "get_zone_info": false, 00:15:57.611 "zone_management": false, 00:15:57.611 "zone_append": false, 00:15:57.611 "compare": false, 00:15:57.611 "compare_and_write": false, 00:15:57.611 "abort": true, 00:15:57.611 "seek_hole": false, 00:15:57.611 "seek_data": false, 00:15:57.611 "copy": true, 00:15:57.611 "nvme_iov_md": false 00:15:57.611 }, 00:15:57.611 "memory_domains": [ 00:15:57.611 { 00:15:57.611 "dma_device_id": "system", 00:15:57.611 "dma_device_type": 1 00:15:57.611 }, 00:15:57.611 { 00:15:57.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.611 "dma_device_type": 2 00:15:57.611 } 00:15:57.611 ], 00:15:57.611 "driver_specific": {} 00:15:57.611 } 00:15:57.611 ] 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.611 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.870 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.870 "name": "Existed_Raid", 00:15:57.870 "uuid": "e9a419d0-1989-47b0-be58-2c1937ddaa56", 00:15:57.870 "strip_size_kb": 0, 00:15:57.870 "state": "configuring", 00:15:57.870 "raid_level": "raid1", 00:15:57.870 "superblock": true, 00:15:57.871 "num_base_bdevs": 4, 00:15:57.871 "num_base_bdevs_discovered": 3, 00:15:57.871 "num_base_bdevs_operational": 4, 00:15:57.871 "base_bdevs_list": [ 00:15:57.871 { 00:15:57.871 "name": "BaseBdev1", 00:15:57.871 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:15:57.871 "is_configured": true, 00:15:57.871 "data_offset": 2048, 00:15:57.871 "data_size": 63488 00:15:57.871 }, 00:15:57.871 { 00:15:57.871 "name": "BaseBdev2", 00:15:57.871 "uuid": "a1995cc8-d81c-410f-aaaa-540e1e7de978", 00:15:57.871 "is_configured": true, 00:15:57.871 "data_offset": 2048, 00:15:57.871 
"data_size": 63488 00:15:57.871 }, 00:15:57.871 { 00:15:57.871 "name": "BaseBdev3", 00:15:57.871 "uuid": "b9641a55-be8b-48fc-8033-067de0f8994a", 00:15:57.871 "is_configured": true, 00:15:57.871 "data_offset": 2048, 00:15:57.871 "data_size": 63488 00:15:57.871 }, 00:15:57.871 { 00:15:57.871 "name": "BaseBdev4", 00:15:57.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.871 "is_configured": false, 00:15:57.871 "data_offset": 0, 00:15:57.871 "data_size": 0 00:15:57.871 } 00:15:57.871 ] 00:15:57.871 }' 00:15:57.871 06:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.871 06:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.439 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:58.439 [2024-08-13 06:11:00.200636] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.439 [2024-08-13 06:11:00.200818] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:58.439 [2024-08-13 06:11:00.200835] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:58.439 [2024-08-13 06:11:00.201090] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:58.439 [2024-08-13 06:11:00.201244] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:58.439 [2024-08-13 06:11:00.201254] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:58.439 [2024-08-13 06:11:00.201363] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.439 BaseBdev4 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.699 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:58.958 [ 00:15:58.958 { 00:15:58.958 "name": "BaseBdev4", 00:15:58.958 "aliases": [ 00:15:58.958 "a745fb46-f730-4d54-abbb-ffcbe41056df" 00:15:58.958 ], 00:15:58.958 "product_name": "Malloc disk", 00:15:58.958 "block_size": 512, 00:15:58.958 "num_blocks": 65536, 00:15:58.958 "uuid": "a745fb46-f730-4d54-abbb-ffcbe41056df", 00:15:58.958 "assigned_rate_limits": { 00:15:58.958 "rw_ios_per_sec": 0, 00:15:58.958 "rw_mbytes_per_sec": 0, 00:15:58.958 "r_mbytes_per_sec": 0, 00:15:58.958 "w_mbytes_per_sec": 0 00:15:58.958 }, 00:15:58.958 "claimed": true, 00:15:58.958 "claim_type": "exclusive_write", 00:15:58.958 
"zoned": false, 00:15:58.958 "supported_io_types": { 00:15:58.958 "read": true, 00:15:58.958 "write": true, 00:15:58.958 "unmap": true, 00:15:58.958 "flush": true, 00:15:58.958 "reset": true, 00:15:58.958 "nvme_admin": false, 00:15:58.958 "nvme_io": false, 00:15:58.958 "nvme_io_md": false, 00:15:58.958 "write_zeroes": true, 00:15:58.958 "zcopy": true, 00:15:58.958 "get_zone_info": false, 00:15:58.958 "zone_management": false, 00:15:58.958 "zone_append": false, 00:15:58.958 "compare": false, 00:15:58.958 "compare_and_write": false, 00:15:58.958 "abort": true, 00:15:58.958 "seek_hole": false, 00:15:58.958 "seek_data": false, 00:15:58.958 "copy": true, 00:15:58.958 "nvme_iov_md": false 00:15:58.958 }, 00:15:58.958 "memory_domains": [ 00:15:58.959 { 00:15:58.959 "dma_device_id": "system", 00:15:58.959 "dma_device_type": 1 00:15:58.959 }, 00:15:58.959 { 00:15:58.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.959 "dma_device_type": 2 00:15:58.959 } 00:15:58.959 ], 00:15:58.959 "driver_specific": {} 00:15:58.959 } 00:15:58.959 ] 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.959 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.218 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.218 "name": "Existed_Raid", 00:15:59.218 "uuid": "e9a419d0-1989-47b0-be58-2c1937ddaa56", 00:15:59.218 "strip_size_kb": 0, 00:15:59.218 "state": "online", 00:15:59.218 "raid_level": "raid1", 00:15:59.218 "superblock": true, 00:15:59.218 "num_base_bdevs": 4, 00:15:59.218 "num_base_bdevs_discovered": 4, 00:15:59.218 "num_base_bdevs_operational": 4, 00:15:59.218 "base_bdevs_list": [ 00:15:59.218 { 00:15:59.218 "name": "BaseBdev1", 00:15:59.218 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:15:59.218 "is_configured": true, 00:15:59.218 "data_offset": 2048, 
00:15:59.218 "data_size": 63488 00:15:59.218 }, 00:15:59.218 { 00:15:59.218 "name": "BaseBdev2", 00:15:59.218 "uuid": "a1995cc8-d81c-410f-aaaa-540e1e7de978", 00:15:59.218 "is_configured": true, 00:15:59.218 "data_offset": 2048, 00:15:59.218 "data_size": 63488 00:15:59.218 }, 00:15:59.218 { 00:15:59.218 "name": "BaseBdev3", 00:15:59.218 "uuid": "b9641a55-be8b-48fc-8033-067de0f8994a", 00:15:59.219 "is_configured": true, 00:15:59.219 "data_offset": 2048, 00:15:59.219 "data_size": 63488 00:15:59.219 }, 00:15:59.219 { 00:15:59.219 "name": "BaseBdev4", 00:15:59.219 "uuid": "a745fb46-f730-4d54-abbb-ffcbe41056df", 00:15:59.219 "is_configured": true, 00:15:59.219 "data_offset": 2048, 00:15:59.219 "data_size": 63488 00:15:59.219 } 00:15:59.219 ] 00:15:59.219 }' 00:15:59.219 06:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.219 06:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:59.789 [2024-08-13 06:11:01.510712] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.789 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:59.789 "name": "Existed_Raid", 00:15:59.789 "aliases": [ 00:15:59.789 "e9a419d0-1989-47b0-be58-2c1937ddaa56" 00:15:59.789 ], 00:15:59.789 "product_name": "Raid Volume", 00:15:59.789 "block_size": 512, 00:15:59.789 "num_blocks": 63488, 00:15:59.789 "uuid": "e9a419d0-1989-47b0-be58-2c1937ddaa56", 00:15:59.789 "assigned_rate_limits": { 00:15:59.789 "rw_ios_per_sec": 0, 00:15:59.789 "rw_mbytes_per_sec": 0, 00:15:59.789 "r_mbytes_per_sec": 0, 00:15:59.789 "w_mbytes_per_sec": 0 00:15:59.789 }, 00:15:59.789 "claimed": false, 00:15:59.789 "zoned": false, 00:15:59.789 "supported_io_types": { 00:15:59.789 "read": true, 00:15:59.789 "write": true, 00:15:59.789 "unmap": false, 00:15:59.789 "flush": false, 00:15:59.789 "reset": true, 00:15:59.789 "nvme_admin": false, 00:15:59.789 "nvme_io": false, 00:15:59.789 "nvme_io_md": false, 00:15:59.789 "write_zeroes": true, 00:15:59.789 "zcopy": false, 00:15:59.789 "get_zone_info": false, 00:15:59.789 "zone_management": false, 00:15:59.789 "zone_append": false, 00:15:59.789 "compare": false, 00:15:59.789 "compare_and_write": false, 00:15:59.789 "abort": false, 00:15:59.789 "seek_hole": false, 00:15:59.789 "seek_data": false, 00:15:59.789 "copy": false, 00:15:59.789 "nvme_iov_md": false 00:15:59.789 }, 00:15:59.789 "memory_domains": [ 00:15:59.789 { 00:15:59.789 "dma_device_id": "system", 00:15:59.789 
"dma_device_type": 1 00:15:59.789 }, 00:15:59.789 { 00:15:59.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.789 "dma_device_type": 2 00:15:59.789 }, 00:15:59.789 { 00:15:59.789 "dma_device_id": "system", 00:15:59.789 "dma_device_type": 1 00:15:59.789 }, 00:15:59.789 { 00:15:59.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.789 "dma_device_type": 2 00:15:59.789 }, 00:15:59.789 { 00:15:59.789 "dma_device_id": "system", 00:15:59.789 "dma_device_type": 1 00:15:59.789 }, 00:15:59.789 { 00:15:59.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.789 "dma_device_type": 2 00:15:59.789 }, 00:15:59.789 { 00:15:59.790 "dma_device_id": "system", 00:15:59.790 "dma_device_type": 1 00:15:59.790 }, 00:15:59.790 { 00:15:59.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.790 "dma_device_type": 2 00:15:59.790 } 00:15:59.790 ], 00:15:59.790 "driver_specific": { 00:15:59.790 "raid": { 00:15:59.790 "uuid": "e9a419d0-1989-47b0-be58-2c1937ddaa56", 00:15:59.790 "strip_size_kb": 0, 00:15:59.790 "state": "online", 00:15:59.790 "raid_level": "raid1", 00:15:59.790 "superblock": true, 00:15:59.790 "num_base_bdevs": 4, 00:15:59.790 "num_base_bdevs_discovered": 4, 00:15:59.790 "num_base_bdevs_operational": 4, 00:15:59.790 "base_bdevs_list": [ 00:15:59.790 { 00:15:59.790 "name": "BaseBdev1", 00:15:59.790 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:15:59.790 "is_configured": true, 00:15:59.790 "data_offset": 2048, 00:15:59.790 "data_size": 63488 00:15:59.790 }, 00:15:59.790 { 00:15:59.790 "name": "BaseBdev2", 00:15:59.790 "uuid": "a1995cc8-d81c-410f-aaaa-540e1e7de978", 00:15:59.790 "is_configured": true, 00:15:59.790 "data_offset": 2048, 00:15:59.790 "data_size": 63488 00:15:59.790 }, 00:15:59.790 { 00:15:59.790 "name": "BaseBdev3", 00:15:59.790 "uuid": "b9641a55-be8b-48fc-8033-067de0f8994a", 00:15:59.790 "is_configured": true, 00:15:59.790 "data_offset": 2048, 00:15:59.790 "data_size": 63488 00:15:59.790 }, 00:15:59.790 { 00:15:59.790 "name": "BaseBdev4", 00:15:59.790 "uuid": "a745fb46-f730-4d54-abbb-ffcbe41056df", 00:15:59.790 "is_configured": true, 00:15:59.790 "data_offset": 2048, 00:15:59.790 "data_size": 63488 00:15:59.790 } 00:15:59.790 ] 00:15:59.790 } 00:15:59.790 } 00:15:59.790 }' 00:15:59.790 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.050 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:00.050 BaseBdev2 00:16:00.050 BaseBdev3 00:16:00.050 BaseBdev4' 00:16:00.050 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:00.050 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:00.050 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:00.050 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:00.050 "name": "BaseBdev1", 00:16:00.050 "aliases": [ 00:16:00.050 "4ca0e405-b4da-4a73-aea1-2d4081a07648" 00:16:00.050 ], 00:16:00.050 "product_name": "Malloc disk", 00:16:00.050 "block_size": 512, 00:16:00.050 "num_blocks": 65536, 00:16:00.050 "uuid": "4ca0e405-b4da-4a73-aea1-2d4081a07648", 00:16:00.050 "assigned_rate_limits": { 00:16:00.050 "rw_ios_per_sec": 0, 00:16:00.050 "rw_mbytes_per_sec": 0, 00:16:00.050 "r_mbytes_per_sec": 0, 
00:16:00.050 "w_mbytes_per_sec": 0 00:16:00.050 }, 00:16:00.050 "claimed": true, 00:16:00.050 "claim_type": "exclusive_write", 00:16:00.050 "zoned": false, 00:16:00.050 "supported_io_types": { 00:16:00.051 "read": true, 00:16:00.051 "write": true, 00:16:00.051 "unmap": true, 00:16:00.051 "flush": true, 00:16:00.051 "reset": true, 00:16:00.051 "nvme_admin": false, 00:16:00.051 "nvme_io": false, 00:16:00.051 "nvme_io_md": false, 00:16:00.051 "write_zeroes": true, 00:16:00.051 "zcopy": true, 00:16:00.051 "get_zone_info": false, 00:16:00.051 "zone_management": false, 00:16:00.051 "zone_append": false, 00:16:00.051 "compare": false, 00:16:00.051 "compare_and_write": false, 00:16:00.051 "abort": true, 00:16:00.051 "seek_hole": false, 00:16:00.051 "seek_data": false, 00:16:00.051 "copy": true, 00:16:00.051 "nvme_iov_md": false 00:16:00.051 }, 00:16:00.051 "memory_domains": [ 00:16:00.051 { 00:16:00.051 "dma_device_id": "system", 00:16:00.051 "dma_device_type": 1 00:16:00.051 }, 00:16:00.051 { 00:16:00.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.051 "dma_device_type": 2 00:16:00.051 } 00:16:00.051 ], 00:16:00.051 "driver_specific": {} 00:16:00.051 }' 00:16:00.051 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.051 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.311 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:00.311 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.311 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.311 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:00.311 06:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.311 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.311 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:00.311 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.311 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.571 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:00.571 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:00.571 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:00.571 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:00.571 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:00.571 "name": "BaseBdev2", 00:16:00.571 "aliases": [ 00:16:00.571 "a1995cc8-d81c-410f-aaaa-540e1e7de978" 00:16:00.571 ], 00:16:00.571 "product_name": "Malloc disk", 00:16:00.571 "block_size": 512, 00:16:00.571 "num_blocks": 65536, 00:16:00.571 "uuid": "a1995cc8-d81c-410f-aaaa-540e1e7de978", 00:16:00.571 "assigned_rate_limits": { 00:16:00.571 "rw_ios_per_sec": 0, 00:16:00.571 "rw_mbytes_per_sec": 0, 00:16:00.571 "r_mbytes_per_sec": 0, 00:16:00.571 "w_mbytes_per_sec": 0 00:16:00.571 }, 00:16:00.571 "claimed": true, 00:16:00.571 "claim_type": "exclusive_write", 00:16:00.571 "zoned": 
false, 00:16:00.571 "supported_io_types": { 00:16:00.571 "read": true, 00:16:00.571 "write": true, 00:16:00.571 "unmap": true, 00:16:00.571 "flush": true, 00:16:00.571 "reset": true, 00:16:00.571 "nvme_admin": false, 00:16:00.571 "nvme_io": false, 00:16:00.571 "nvme_io_md": false, 00:16:00.571 "write_zeroes": true, 00:16:00.571 "zcopy": true, 00:16:00.571 "get_zone_info": false, 00:16:00.571 "zone_management": false, 00:16:00.571 "zone_append": false, 00:16:00.571 "compare": false, 00:16:00.571 "compare_and_write": false, 00:16:00.571 "abort": true, 00:16:00.571 "seek_hole": false, 00:16:00.572 "seek_data": false, 00:16:00.572 "copy": true, 00:16:00.572 "nvme_iov_md": false 00:16:00.572 }, 00:16:00.572 "memory_domains": [ 00:16:00.572 { 00:16:00.572 "dma_device_id": "system", 00:16:00.572 "dma_device_type": 1 00:16:00.572 }, 00:16:00.572 { 00:16:00.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.572 "dma_device_type": 2 00:16:00.572 } 00:16:00.572 ], 00:16:00.572 "driver_specific": {} 00:16:00.572 }' 00:16:00.572 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.572 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.832 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.092 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:01.092 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:01.092 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:01.092 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:01.092 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:01.092 "name": "BaseBdev3", 00:16:01.092 "aliases": [ 00:16:01.092 "b9641a55-be8b-48fc-8033-067de0f8994a" 00:16:01.092 ], 00:16:01.092 "product_name": "Malloc disk", 00:16:01.092 "block_size": 512, 00:16:01.092 "num_blocks": 65536, 00:16:01.092 "uuid": "b9641a55-be8b-48fc-8033-067de0f8994a", 00:16:01.092 "assigned_rate_limits": { 00:16:01.092 "rw_ios_per_sec": 0, 00:16:01.092 "rw_mbytes_per_sec": 0, 00:16:01.092 "r_mbytes_per_sec": 0, 00:16:01.092 "w_mbytes_per_sec": 0 00:16:01.092 }, 00:16:01.092 "claimed": true, 00:16:01.092 "claim_type": "exclusive_write", 00:16:01.092 "zoned": false, 00:16:01.092 "supported_io_types": { 00:16:01.092 "read": true, 00:16:01.092 "write": true, 00:16:01.092 "unmap": true, 00:16:01.092 "flush": 
true, 00:16:01.092 "reset": true, 00:16:01.092 "nvme_admin": false, 00:16:01.092 "nvme_io": false, 00:16:01.092 "nvme_io_md": false, 00:16:01.092 "write_zeroes": true, 00:16:01.092 "zcopy": true, 00:16:01.092 "get_zone_info": false, 00:16:01.092 "zone_management": false, 00:16:01.092 "zone_append": false, 00:16:01.092 "compare": false, 00:16:01.092 "compare_and_write": false, 00:16:01.092 "abort": true, 00:16:01.092 "seek_hole": false, 00:16:01.092 "seek_data": false, 00:16:01.092 "copy": true, 00:16:01.092 "nvme_iov_md": false 00:16:01.092 }, 00:16:01.092 "memory_domains": [ 00:16:01.092 { 00:16:01.092 "dma_device_id": "system", 00:16:01.092 "dma_device_type": 1 00:16:01.092 }, 00:16:01.092 { 00:16:01.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.092 "dma_device_type": 2 00:16:01.092 } 00:16:01.092 ], 00:16:01.092 "driver_specific": {} 00:16:01.092 }' 00:16:01.092 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.352 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.352 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:01.352 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.352 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.352 06:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:01.352 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.352 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.352 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:01.352 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.352 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.612 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:01.612 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:01.613 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:01.613 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:01.613 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:01.613 "name": "BaseBdev4", 00:16:01.613 "aliases": [ 00:16:01.613 "a745fb46-f730-4d54-abbb-ffcbe41056df" 00:16:01.613 ], 00:16:01.613 "product_name": "Malloc disk", 00:16:01.613 "block_size": 512, 00:16:01.613 "num_blocks": 65536, 00:16:01.613 "uuid": "a745fb46-f730-4d54-abbb-ffcbe41056df", 00:16:01.613 "assigned_rate_limits": { 00:16:01.613 "rw_ios_per_sec": 0, 00:16:01.613 "rw_mbytes_per_sec": 0, 00:16:01.613 "r_mbytes_per_sec": 0, 00:16:01.613 "w_mbytes_per_sec": 0 00:16:01.613 }, 00:16:01.613 "claimed": true, 00:16:01.613 "claim_type": "exclusive_write", 00:16:01.613 "zoned": false, 00:16:01.613 "supported_io_types": { 00:16:01.613 "read": true, 00:16:01.613 "write": true, 00:16:01.613 "unmap": true, 00:16:01.613 "flush": true, 00:16:01.613 "reset": true, 00:16:01.613 "nvme_admin": false, 00:16:01.613 "nvme_io": false, 00:16:01.613 "nvme_io_md": false, 00:16:01.613 
"write_zeroes": true, 00:16:01.613 "zcopy": true, 00:16:01.613 "get_zone_info": false, 00:16:01.613 "zone_management": false, 00:16:01.613 "zone_append": false, 00:16:01.613 "compare": false, 00:16:01.613 "compare_and_write": false, 00:16:01.613 "abort": true, 00:16:01.613 "seek_hole": false, 00:16:01.613 "seek_data": false, 00:16:01.613 "copy": true, 00:16:01.613 "nvme_iov_md": false 00:16:01.613 }, 00:16:01.613 "memory_domains": [ 00:16:01.613 { 00:16:01.613 "dma_device_id": "system", 00:16:01.613 "dma_device_type": 1 00:16:01.613 }, 00:16:01.613 { 00:16:01.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.613 "dma_device_type": 2 00:16:01.613 } 00:16:01.613 ], 00:16:01.613 "driver_specific": {} 00:16:01.613 }' 00:16:01.613 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.613 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.873 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:02.134 [2024-08-13 06:11:03.870599] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.134 06:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.393 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.393 "name": "Existed_Raid", 00:16:02.393 "uuid": "e9a419d0-1989-47b0-be58-2c1937ddaa56", 00:16:02.393 "strip_size_kb": 0, 00:16:02.393 "state": "online", 00:16:02.393 "raid_level": "raid1", 00:16:02.393 "superblock": true, 00:16:02.393 "num_base_bdevs": 4, 00:16:02.393 "num_base_bdevs_discovered": 3, 00:16:02.393 "num_base_bdevs_operational": 3, 00:16:02.393 "base_bdevs_list": [ 00:16:02.393 { 00:16:02.393 "name": null, 00:16:02.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.393 "is_configured": false, 00:16:02.393 "data_offset": 2048, 00:16:02.393 "data_size": 63488 00:16:02.393 }, 00:16:02.393 { 00:16:02.393 "name": "BaseBdev2", 00:16:02.393 "uuid": "a1995cc8-d81c-410f-aaaa-540e1e7de978", 00:16:02.393 "is_configured": true, 00:16:02.393 "data_offset": 2048, 00:16:02.393 "data_size": 63488 00:16:02.393 }, 00:16:02.393 { 00:16:02.393 "name": "BaseBdev3", 00:16:02.393 "uuid": "b9641a55-be8b-48fc-8033-067de0f8994a", 00:16:02.393 "is_configured": true, 00:16:02.393 "data_offset": 2048, 00:16:02.393 "data_size": 63488 00:16:02.393 }, 00:16:02.393 { 00:16:02.393 "name": "BaseBdev4", 00:16:02.393 "uuid": "a745fb46-f730-4d54-abbb-ffcbe41056df", 00:16:02.393 "is_configured": true, 00:16:02.393 "data_offset": 2048, 00:16:02.393 "data_size": 63488 00:16:02.393 } 00:16:02.393 ] 00:16:02.393 }' 00:16:02.393 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.393 06:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.963 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:02.963 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:02.963 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.963 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:03.224 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:03.224 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:03.224 06:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:03.224 [2024-08-13 06:11:05.003983] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:03.483 06:11:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:03.483 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:03.484 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.484 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:03.484 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:03.484 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:03.484 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:03.744 [2024-08-13 06:11:05.422478] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:03.744 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:03.744 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:03.744 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.744 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:04.003 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:04.003 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.003 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:04.263 [2024-08-13 06:11:05.848605] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:04.263 [2024-08-13 06:11:05.848772] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.263 [2024-08-13 06:11:05.859634] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.263 [2024-08-13 06:11:05.859718] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.263 [2024-08-13 06:11:05.859755] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:04.263 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:04.263 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.263 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.263 06:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:04.523 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:04.523 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:04.523 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:04.523 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:04.523 
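(Reader's note: the trace below rebuilds the base bdevs and re-assembles the raid1 array step by step. As a condensed, illustrative summary only, the RPC sequence being exercised looks roughly like the following shell sketch. It restates commands that appear verbatim in this log (rpc.py against /var/tmp/spdk-raid.sock); the RPC shorthand variable, the loop, and the exact ordering of the later remove/add cycles are simplifications, not the test script itself.)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Re-create malloc base bdevs (32 MiB, 512-byte blocks) and wait for examine to finish.
for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "$b"
    $RPC bdev_wait_for_examine
done

# Assemble a raid1 bdev with an on-disk superblock (-s). BaseBdev1 does not exist yet,
# so Existed_Raid stays in the "configuring" state until all four base bdevs are present.
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Inspect the raid bdev; the test checks state and num_base_bdevs_discovered with jq.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# Representative of the remove/re-add cycles exercised further down in the trace:
$RPC bdev_raid_remove_base_bdev BaseBdev2
$RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev3
$RPC bdev_malloc_delete BaseBdev1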
06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:04.523 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.523 BaseBdev2 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.785 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:05.051 [ 00:16:05.051 { 00:16:05.051 "name": "BaseBdev2", 00:16:05.051 "aliases": [ 00:16:05.051 "2e5e10d7-6605-4814-91e8-ccaec1a2f364" 00:16:05.051 ], 00:16:05.051 "product_name": "Malloc disk", 00:16:05.051 "block_size": 512, 00:16:05.051 "num_blocks": 65536, 00:16:05.051 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:05.051 "assigned_rate_limits": { 00:16:05.051 "rw_ios_per_sec": 0, 00:16:05.051 "rw_mbytes_per_sec": 0, 00:16:05.051 "r_mbytes_per_sec": 0, 00:16:05.051 "w_mbytes_per_sec": 0 00:16:05.051 }, 00:16:05.051 "claimed": false, 00:16:05.051 "zoned": false, 00:16:05.051 "supported_io_types": { 00:16:05.051 "read": true, 00:16:05.051 "write": true, 00:16:05.051 "unmap": true, 00:16:05.051 "flush": true, 00:16:05.051 "reset": true, 00:16:05.051 "nvme_admin": false, 00:16:05.051 "nvme_io": false, 00:16:05.051 "nvme_io_md": false, 00:16:05.051 "write_zeroes": true, 00:16:05.051 "zcopy": true, 00:16:05.051 "get_zone_info": false, 00:16:05.051 "zone_management": false, 00:16:05.051 "zone_append": false, 00:16:05.051 "compare": false, 00:16:05.051 "compare_and_write": false, 00:16:05.051 "abort": true, 00:16:05.051 "seek_hole": false, 00:16:05.051 "seek_data": false, 00:16:05.051 "copy": true, 00:16:05.051 "nvme_iov_md": false 00:16:05.051 }, 00:16:05.051 "memory_domains": [ 00:16:05.051 { 00:16:05.051 "dma_device_id": "system", 00:16:05.051 "dma_device_type": 1 00:16:05.051 }, 00:16:05.051 { 00:16:05.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.051 "dma_device_type": 2 00:16:05.051 } 00:16:05.051 ], 00:16:05.051 "driver_specific": {} 00:16:05.051 } 00:16:05.051 ] 00:16:05.051 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:05.051 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:05.051 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:05.051 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:16:05.333 BaseBdev3 00:16:05.333 06:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:05.333 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:05.333 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:05.333 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:05.333 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:05.333 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:05.333 06:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.333 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.608 [ 00:16:05.608 { 00:16:05.608 "name": "BaseBdev3", 00:16:05.608 "aliases": [ 00:16:05.608 "bc840d97-0299-4ef1-ac03-ead01f12f81c" 00:16:05.608 ], 00:16:05.608 "product_name": "Malloc disk", 00:16:05.608 "block_size": 512, 00:16:05.608 "num_blocks": 65536, 00:16:05.608 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:05.608 "assigned_rate_limits": { 00:16:05.608 "rw_ios_per_sec": 0, 00:16:05.608 "rw_mbytes_per_sec": 0, 00:16:05.608 "r_mbytes_per_sec": 0, 00:16:05.608 "w_mbytes_per_sec": 0 00:16:05.608 }, 00:16:05.608 "claimed": false, 00:16:05.608 "zoned": false, 00:16:05.608 "supported_io_types": { 00:16:05.608 "read": true, 00:16:05.608 "write": true, 00:16:05.608 "unmap": true, 00:16:05.608 "flush": true, 00:16:05.608 "reset": true, 00:16:05.608 "nvme_admin": false, 00:16:05.608 "nvme_io": false, 00:16:05.608 "nvme_io_md": false, 00:16:05.608 "write_zeroes": true, 00:16:05.608 "zcopy": true, 00:16:05.608 "get_zone_info": false, 00:16:05.608 "zone_management": false, 00:16:05.608 "zone_append": false, 00:16:05.608 "compare": false, 00:16:05.608 "compare_and_write": false, 00:16:05.608 "abort": true, 00:16:05.608 "seek_hole": false, 00:16:05.608 "seek_data": false, 00:16:05.608 "copy": true, 00:16:05.608 "nvme_iov_md": false 00:16:05.608 }, 00:16:05.608 "memory_domains": [ 00:16:05.608 { 00:16:05.608 "dma_device_id": "system", 00:16:05.608 "dma_device_type": 1 00:16:05.608 }, 00:16:05.608 { 00:16:05.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.608 "dma_device_type": 2 00:16:05.608 } 00:16:05.608 ], 00:16:05.608 "driver_specific": {} 00:16:05.608 } 00:16:05.608 ] 00:16:05.608 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:05.608 06:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:05.608 06:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:05.609 06:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:05.868 BaseBdev4 00:16:05.868 06:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:05.868 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:05.868 06:11:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:05.868 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:05.868 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:05.868 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:05.868 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.128 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.128 [ 00:16:06.128 { 00:16:06.128 "name": "BaseBdev4", 00:16:06.128 "aliases": [ 00:16:06.128 "f026fd3d-aed7-4761-a192-4a72db305392" 00:16:06.128 ], 00:16:06.128 "product_name": "Malloc disk", 00:16:06.128 "block_size": 512, 00:16:06.128 "num_blocks": 65536, 00:16:06.128 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:06.128 "assigned_rate_limits": { 00:16:06.128 "rw_ios_per_sec": 0, 00:16:06.128 "rw_mbytes_per_sec": 0, 00:16:06.128 "r_mbytes_per_sec": 0, 00:16:06.128 "w_mbytes_per_sec": 0 00:16:06.128 }, 00:16:06.128 "claimed": false, 00:16:06.128 "zoned": false, 00:16:06.128 "supported_io_types": { 00:16:06.128 "read": true, 00:16:06.128 "write": true, 00:16:06.128 "unmap": true, 00:16:06.128 "flush": true, 00:16:06.128 "reset": true, 00:16:06.128 "nvme_admin": false, 00:16:06.128 "nvme_io": false, 00:16:06.128 "nvme_io_md": false, 00:16:06.128 "write_zeroes": true, 00:16:06.128 "zcopy": true, 00:16:06.128 "get_zone_info": false, 00:16:06.128 "zone_management": false, 00:16:06.128 "zone_append": false, 00:16:06.128 "compare": false, 00:16:06.128 "compare_and_write": false, 00:16:06.128 "abort": true, 00:16:06.128 "seek_hole": false, 00:16:06.128 "seek_data": false, 00:16:06.128 "copy": true, 00:16:06.128 "nvme_iov_md": false 00:16:06.128 }, 00:16:06.128 "memory_domains": [ 00:16:06.128 { 00:16:06.128 "dma_device_id": "system", 00:16:06.128 "dma_device_type": 1 00:16:06.128 }, 00:16:06.128 { 00:16:06.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.128 "dma_device_type": 2 00:16:06.128 } 00:16:06.128 ], 00:16:06.128 "driver_specific": {} 00:16:06.128 } 00:16:06.128 ] 00:16:06.128 06:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:06.128 06:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:06.128 06:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:06.128 06:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:06.388 [2024-08-13 06:11:08.069170] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.388 [2024-08-13 06:11:08.069281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.388 [2024-08-13 06:11:08.069319] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.388 [2024-08-13 06:11:08.071074] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.388 [2024-08-13 
06:11:08.071168] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.388 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.647 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.647 "name": "Existed_Raid", 00:16:06.647 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:06.647 "strip_size_kb": 0, 00:16:06.647 "state": "configuring", 00:16:06.647 "raid_level": "raid1", 00:16:06.647 "superblock": true, 00:16:06.647 "num_base_bdevs": 4, 00:16:06.647 "num_base_bdevs_discovered": 3, 00:16:06.647 "num_base_bdevs_operational": 4, 00:16:06.647 "base_bdevs_list": [ 00:16:06.647 { 00:16:06.647 "name": "BaseBdev1", 00:16:06.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.647 "is_configured": false, 00:16:06.647 "data_offset": 0, 00:16:06.647 "data_size": 0 00:16:06.647 }, 00:16:06.647 { 00:16:06.647 "name": "BaseBdev2", 00:16:06.647 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:06.647 "is_configured": true, 00:16:06.647 "data_offset": 2048, 00:16:06.647 "data_size": 63488 00:16:06.647 }, 00:16:06.647 { 00:16:06.647 "name": "BaseBdev3", 00:16:06.647 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:06.647 "is_configured": true, 00:16:06.647 "data_offset": 2048, 00:16:06.647 "data_size": 63488 00:16:06.647 }, 00:16:06.647 { 00:16:06.647 "name": "BaseBdev4", 00:16:06.647 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:06.647 "is_configured": true, 00:16:06.647 "data_offset": 2048, 00:16:06.647 "data_size": 63488 00:16:06.647 } 00:16:06.647 ] 00:16:06.647 }' 00:16:06.647 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.647 06:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.216 06:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:07.476 [2024-08-13 06:11:09.011555] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.476 "name": "Existed_Raid", 00:16:07.476 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:07.476 "strip_size_kb": 0, 00:16:07.476 "state": "configuring", 00:16:07.476 "raid_level": "raid1", 00:16:07.476 "superblock": true, 00:16:07.476 "num_base_bdevs": 4, 00:16:07.476 "num_base_bdevs_discovered": 2, 00:16:07.476 "num_base_bdevs_operational": 4, 00:16:07.476 "base_bdevs_list": [ 00:16:07.476 { 00:16:07.476 "name": "BaseBdev1", 00:16:07.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.476 "is_configured": false, 00:16:07.476 "data_offset": 0, 00:16:07.476 "data_size": 0 00:16:07.476 }, 00:16:07.476 { 00:16:07.476 "name": null, 00:16:07.476 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:07.476 "is_configured": false, 00:16:07.476 "data_offset": 2048, 00:16:07.476 "data_size": 63488 00:16:07.476 }, 00:16:07.476 { 00:16:07.476 "name": "BaseBdev3", 00:16:07.476 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:07.476 "is_configured": true, 00:16:07.476 "data_offset": 2048, 00:16:07.476 "data_size": 63488 00:16:07.476 }, 00:16:07.476 { 00:16:07.476 "name": "BaseBdev4", 00:16:07.476 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:07.476 "is_configured": true, 00:16:07.476 "data_offset": 2048, 00:16:07.476 "data_size": 63488 00:16:07.476 } 00:16:07.476 ] 00:16:07.476 }' 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.476 06:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.045 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.045 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:08.304 06:11:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:08.304 06:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.564 [2024-08-13 06:11:10.168532] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.564 BaseBdev1 00:16:08.564 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:08.564 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:08.564 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:08.564 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:08.564 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:08.564 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:08.564 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.824 [ 00:16:08.824 { 00:16:08.824 "name": "BaseBdev1", 00:16:08.824 "aliases": [ 00:16:08.824 "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf" 00:16:08.824 ], 00:16:08.824 "product_name": "Malloc disk", 00:16:08.824 "block_size": 512, 00:16:08.824 "num_blocks": 65536, 00:16:08.824 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:08.824 "assigned_rate_limits": { 00:16:08.824 "rw_ios_per_sec": 0, 00:16:08.824 "rw_mbytes_per_sec": 0, 00:16:08.824 "r_mbytes_per_sec": 0, 00:16:08.824 "w_mbytes_per_sec": 0 00:16:08.824 }, 00:16:08.824 "claimed": true, 00:16:08.824 "claim_type": "exclusive_write", 00:16:08.824 "zoned": false, 00:16:08.824 "supported_io_types": { 00:16:08.824 "read": true, 00:16:08.824 "write": true, 00:16:08.824 "unmap": true, 00:16:08.824 "flush": true, 00:16:08.824 "reset": true, 00:16:08.824 "nvme_admin": false, 00:16:08.824 "nvme_io": false, 00:16:08.824 "nvme_io_md": false, 00:16:08.824 "write_zeroes": true, 00:16:08.824 "zcopy": true, 00:16:08.824 "get_zone_info": false, 00:16:08.824 "zone_management": false, 00:16:08.824 "zone_append": false, 00:16:08.824 "compare": false, 00:16:08.824 "compare_and_write": false, 00:16:08.824 "abort": true, 00:16:08.824 "seek_hole": false, 00:16:08.824 "seek_data": false, 00:16:08.824 "copy": true, 00:16:08.824 "nvme_iov_md": false 00:16:08.824 }, 00:16:08.824 "memory_domains": [ 00:16:08.824 { 00:16:08.824 "dma_device_id": "system", 00:16:08.824 "dma_device_type": 1 00:16:08.824 }, 00:16:08.824 { 00:16:08.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.824 "dma_device_type": 2 00:16:08.824 } 00:16:08.824 ], 00:16:08.824 "driver_specific": {} 00:16:08.824 } 00:16:08.824 ] 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.824 06:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.824 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.084 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.084 "name": "Existed_Raid", 00:16:09.084 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:09.084 "strip_size_kb": 0, 00:16:09.084 "state": "configuring", 00:16:09.084 "raid_level": "raid1", 00:16:09.084 "superblock": true, 00:16:09.084 "num_base_bdevs": 4, 00:16:09.084 "num_base_bdevs_discovered": 3, 00:16:09.084 "num_base_bdevs_operational": 4, 00:16:09.084 "base_bdevs_list": [ 00:16:09.084 { 00:16:09.084 "name": "BaseBdev1", 00:16:09.084 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:09.084 "is_configured": true, 00:16:09.084 "data_offset": 2048, 00:16:09.084 "data_size": 63488 00:16:09.084 }, 00:16:09.084 { 00:16:09.084 "name": null, 00:16:09.084 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:09.084 "is_configured": false, 00:16:09.084 "data_offset": 2048, 00:16:09.084 "data_size": 63488 00:16:09.084 }, 00:16:09.084 { 00:16:09.084 "name": "BaseBdev3", 00:16:09.084 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:09.084 "is_configured": true, 00:16:09.084 "data_offset": 2048, 00:16:09.084 "data_size": 63488 00:16:09.084 }, 00:16:09.084 { 00:16:09.084 "name": "BaseBdev4", 00:16:09.084 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:09.084 "is_configured": true, 00:16:09.084 "data_offset": 2048, 00:16:09.084 "data_size": 63488 00:16:09.084 } 00:16:09.084 ] 00:16:09.084 }' 00:16:09.084 06:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.084 06:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.654 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.654 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.913 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:09.913 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:10.173 [2024-08-13 06:11:11.741904] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.173 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.433 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.433 "name": "Existed_Raid", 00:16:10.433 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:10.433 "strip_size_kb": 0, 00:16:10.433 "state": "configuring", 00:16:10.433 "raid_level": "raid1", 00:16:10.433 "superblock": true, 00:16:10.433 "num_base_bdevs": 4, 00:16:10.433 "num_base_bdevs_discovered": 2, 00:16:10.433 "num_base_bdevs_operational": 4, 00:16:10.433 "base_bdevs_list": [ 00:16:10.433 { 00:16:10.433 "name": "BaseBdev1", 00:16:10.433 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:10.433 "is_configured": true, 00:16:10.433 "data_offset": 2048, 00:16:10.433 "data_size": 63488 00:16:10.433 }, 00:16:10.433 { 00:16:10.433 "name": null, 00:16:10.433 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:10.433 "is_configured": false, 00:16:10.433 "data_offset": 2048, 00:16:10.433 "data_size": 63488 00:16:10.433 }, 00:16:10.433 { 00:16:10.433 "name": null, 00:16:10.433 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:10.433 "is_configured": false, 00:16:10.433 "data_offset": 2048, 00:16:10.433 "data_size": 63488 00:16:10.433 }, 00:16:10.433 { 00:16:10.433 "name": "BaseBdev4", 00:16:10.433 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:10.433 "is_configured": true, 00:16:10.433 "data_offset": 2048, 00:16:10.433 "data_size": 63488 00:16:10.433 } 00:16:10.433 ] 00:16:10.433 }' 00:16:10.433 06:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.433 06:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.002 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.002 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.002 06:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:11.002 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:11.262 [2024-08-13 06:11:12.872087] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.262 06:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.522 06:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.522 "name": "Existed_Raid", 00:16:11.522 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:11.522 "strip_size_kb": 0, 00:16:11.522 "state": "configuring", 00:16:11.522 "raid_level": "raid1", 00:16:11.522 "superblock": true, 00:16:11.522 "num_base_bdevs": 4, 00:16:11.522 "num_base_bdevs_discovered": 3, 00:16:11.522 "num_base_bdevs_operational": 4, 00:16:11.522 "base_bdevs_list": [ 00:16:11.522 { 00:16:11.522 "name": "BaseBdev1", 00:16:11.522 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:11.522 "is_configured": true, 00:16:11.522 "data_offset": 2048, 00:16:11.522 "data_size": 63488 00:16:11.522 }, 00:16:11.522 { 00:16:11.522 "name": null, 00:16:11.522 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:11.522 "is_configured": false, 00:16:11.522 "data_offset": 2048, 00:16:11.522 "data_size": 63488 00:16:11.522 }, 00:16:11.522 { 00:16:11.522 "name": "BaseBdev3", 00:16:11.522 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:11.522 "is_configured": true, 00:16:11.522 "data_offset": 2048, 00:16:11.522 "data_size": 63488 00:16:11.522 }, 00:16:11.522 { 00:16:11.522 "name": "BaseBdev4", 00:16:11.522 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:11.522 "is_configured": true, 00:16:11.522 "data_offset": 2048, 00:16:11.522 "data_size": 63488 00:16:11.522 } 00:16:11.522 ] 00:16:11.522 }' 00:16:11.522 06:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.522 06:11:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.096 06:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.096 06:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:12.096 06:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:12.096 06:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:12.356 [2024-08-13 06:11:14.014155] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.356 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.616 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.616 "name": "Existed_Raid", 00:16:12.616 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:12.616 "strip_size_kb": 0, 00:16:12.616 "state": "configuring", 00:16:12.616 "raid_level": "raid1", 00:16:12.616 "superblock": true, 00:16:12.616 "num_base_bdevs": 4, 00:16:12.616 "num_base_bdevs_discovered": 2, 00:16:12.616 "num_base_bdevs_operational": 4, 00:16:12.616 "base_bdevs_list": [ 00:16:12.616 { 00:16:12.616 "name": null, 00:16:12.616 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:12.616 "is_configured": false, 00:16:12.616 "data_offset": 2048, 00:16:12.616 "data_size": 63488 00:16:12.616 }, 00:16:12.616 { 00:16:12.616 "name": null, 00:16:12.616 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:12.616 "is_configured": false, 00:16:12.616 "data_offset": 2048, 00:16:12.616 "data_size": 63488 00:16:12.616 }, 00:16:12.616 { 00:16:12.616 "name": "BaseBdev3", 00:16:12.616 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:12.616 "is_configured": true, 00:16:12.616 "data_offset": 2048, 00:16:12.616 "data_size": 63488 00:16:12.616 }, 00:16:12.616 { 00:16:12.616 "name": "BaseBdev4", 00:16:12.616 "uuid": 
"f026fd3d-aed7-4761-a192-4a72db305392", 00:16:12.616 "is_configured": true, 00:16:12.616 "data_offset": 2048, 00:16:12.616 "data_size": 63488 00:16:12.616 } 00:16:12.616 ] 00:16:12.616 }' 00:16:12.616 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.616 06:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.185 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.185 06:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:13.444 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:13.445 [2024-08-13 06:11:15.174528] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.445 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.704 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.704 "name": "Existed_Raid", 00:16:13.704 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:13.704 "strip_size_kb": 0, 00:16:13.704 "state": "configuring", 00:16:13.704 "raid_level": "raid1", 00:16:13.704 "superblock": true, 00:16:13.704 "num_base_bdevs": 4, 00:16:13.704 "num_base_bdevs_discovered": 3, 00:16:13.704 "num_base_bdevs_operational": 4, 00:16:13.704 "base_bdevs_list": [ 00:16:13.704 { 00:16:13.704 "name": null, 00:16:13.704 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:13.704 "is_configured": false, 00:16:13.704 "data_offset": 2048, 00:16:13.704 "data_size": 63488 00:16:13.704 }, 00:16:13.704 { 00:16:13.704 "name": "BaseBdev2", 00:16:13.704 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:13.704 "is_configured": true, 
00:16:13.704 "data_offset": 2048, 00:16:13.704 "data_size": 63488 00:16:13.704 }, 00:16:13.704 { 00:16:13.704 "name": "BaseBdev3", 00:16:13.704 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:13.704 "is_configured": true, 00:16:13.704 "data_offset": 2048, 00:16:13.704 "data_size": 63488 00:16:13.704 }, 00:16:13.704 { 00:16:13.704 "name": "BaseBdev4", 00:16:13.704 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:13.704 "is_configured": true, 00:16:13.704 "data_offset": 2048, 00:16:13.704 "data_size": 63488 00:16:13.704 } 00:16:13.704 ] 00:16:13.704 }' 00:16:13.704 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.704 06:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.274 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.274 06:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:14.533 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:14.533 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:14.533 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.792 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf 00:16:14.792 [2024-08-13 06:11:16.519276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:14.792 [2024-08-13 06:11:16.519438] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:14.792 [2024-08-13 06:11:16.519450] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.793 [2024-08-13 06:11:16.519683] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:16:14.793 [2024-08-13 06:11:16.519790] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:14.793 [2024-08-13 06:11:16.519801] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:16:14.793 [2024-08-13 06:11:16.519889] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.793 NewBaseBdev 00:16:14.793 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:14.793 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:14.793 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:14.793 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:14.793 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:14.793 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:14.793 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:16:15.052 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:15.313 [ 00:16:15.313 { 00:16:15.313 "name": "NewBaseBdev", 00:16:15.313 "aliases": [ 00:16:15.313 "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf" 00:16:15.313 ], 00:16:15.313 "product_name": "Malloc disk", 00:16:15.313 "block_size": 512, 00:16:15.313 "num_blocks": 65536, 00:16:15.313 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:15.313 "assigned_rate_limits": { 00:16:15.313 "rw_ios_per_sec": 0, 00:16:15.313 "rw_mbytes_per_sec": 0, 00:16:15.313 "r_mbytes_per_sec": 0, 00:16:15.313 "w_mbytes_per_sec": 0 00:16:15.313 }, 00:16:15.313 "claimed": true, 00:16:15.313 "claim_type": "exclusive_write", 00:16:15.313 "zoned": false, 00:16:15.313 "supported_io_types": { 00:16:15.313 "read": true, 00:16:15.313 "write": true, 00:16:15.313 "unmap": true, 00:16:15.313 "flush": true, 00:16:15.313 "reset": true, 00:16:15.313 "nvme_admin": false, 00:16:15.313 "nvme_io": false, 00:16:15.313 "nvme_io_md": false, 00:16:15.313 "write_zeroes": true, 00:16:15.313 "zcopy": true, 00:16:15.313 "get_zone_info": false, 00:16:15.313 "zone_management": false, 00:16:15.313 "zone_append": false, 00:16:15.313 "compare": false, 00:16:15.313 "compare_and_write": false, 00:16:15.313 "abort": true, 00:16:15.313 "seek_hole": false, 00:16:15.313 "seek_data": false, 00:16:15.313 "copy": true, 00:16:15.313 "nvme_iov_md": false 00:16:15.313 }, 00:16:15.313 "memory_domains": [ 00:16:15.313 { 00:16:15.313 "dma_device_id": "system", 00:16:15.313 "dma_device_type": 1 00:16:15.313 }, 00:16:15.313 { 00:16:15.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.313 "dma_device_type": 2 00:16:15.313 } 00:16:15.313 ], 00:16:15.313 "driver_specific": {} 00:16:15.313 } 00:16:15.313 ] 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.313 06:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.313 06:11:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.313 "name": "Existed_Raid", 00:16:15.313 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:15.313 "strip_size_kb": 0, 00:16:15.313 "state": "online", 00:16:15.313 "raid_level": "raid1", 00:16:15.313 "superblock": true, 00:16:15.313 "num_base_bdevs": 4, 00:16:15.313 "num_base_bdevs_discovered": 4, 00:16:15.313 "num_base_bdevs_operational": 4, 00:16:15.313 "base_bdevs_list": [ 00:16:15.313 { 00:16:15.313 "name": "NewBaseBdev", 00:16:15.313 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 2048, 00:16:15.313 "data_size": 63488 00:16:15.313 }, 00:16:15.313 { 00:16:15.313 "name": "BaseBdev2", 00:16:15.313 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 2048, 00:16:15.313 "data_size": 63488 00:16:15.313 }, 00:16:15.313 { 00:16:15.313 "name": "BaseBdev3", 00:16:15.313 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 2048, 00:16:15.313 "data_size": 63488 00:16:15.313 }, 00:16:15.313 { 00:16:15.313 "name": "BaseBdev4", 00:16:15.313 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 2048, 00:16:15.313 "data_size": 63488 00:16:15.313 } 00:16:15.313 ] 00:16:15.313 }' 00:16:15.572 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.572 06:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:16.141 [2024-08-13 06:11:17.821478] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.141 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:16.141 "name": "Existed_Raid", 00:16:16.141 "aliases": [ 00:16:16.141 "c051e133-4655-43d9-87a2-2fd6b8aa60ac" 00:16:16.141 ], 00:16:16.141 "product_name": "Raid Volume", 00:16:16.141 "block_size": 512, 00:16:16.141 "num_blocks": 63488, 00:16:16.141 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:16.141 "assigned_rate_limits": { 00:16:16.141 "rw_ios_per_sec": 0, 00:16:16.141 "rw_mbytes_per_sec": 0, 00:16:16.141 "r_mbytes_per_sec": 0, 00:16:16.141 "w_mbytes_per_sec": 0 00:16:16.141 }, 00:16:16.141 "claimed": false, 00:16:16.141 "zoned": false, 00:16:16.141 "supported_io_types": { 00:16:16.141 "read": true, 00:16:16.141 "write": true, 00:16:16.141 "unmap": false, 00:16:16.141 "flush": false, 
00:16:16.142 "reset": true, 00:16:16.142 "nvme_admin": false, 00:16:16.142 "nvme_io": false, 00:16:16.142 "nvme_io_md": false, 00:16:16.142 "write_zeroes": true, 00:16:16.142 "zcopy": false, 00:16:16.142 "get_zone_info": false, 00:16:16.142 "zone_management": false, 00:16:16.142 "zone_append": false, 00:16:16.142 "compare": false, 00:16:16.142 "compare_and_write": false, 00:16:16.142 "abort": false, 00:16:16.142 "seek_hole": false, 00:16:16.142 "seek_data": false, 00:16:16.142 "copy": false, 00:16:16.142 "nvme_iov_md": false 00:16:16.142 }, 00:16:16.142 "memory_domains": [ 00:16:16.142 { 00:16:16.142 "dma_device_id": "system", 00:16:16.142 "dma_device_type": 1 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.142 "dma_device_type": 2 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "dma_device_id": "system", 00:16:16.142 "dma_device_type": 1 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.142 "dma_device_type": 2 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "dma_device_id": "system", 00:16:16.142 "dma_device_type": 1 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.142 "dma_device_type": 2 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "dma_device_id": "system", 00:16:16.142 "dma_device_type": 1 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.142 "dma_device_type": 2 00:16:16.142 } 00:16:16.142 ], 00:16:16.142 "driver_specific": { 00:16:16.142 "raid": { 00:16:16.142 "uuid": "c051e133-4655-43d9-87a2-2fd6b8aa60ac", 00:16:16.142 "strip_size_kb": 0, 00:16:16.142 "state": "online", 00:16:16.142 "raid_level": "raid1", 00:16:16.142 "superblock": true, 00:16:16.142 "num_base_bdevs": 4, 00:16:16.142 "num_base_bdevs_discovered": 4, 00:16:16.142 "num_base_bdevs_operational": 4, 00:16:16.142 "base_bdevs_list": [ 00:16:16.142 { 00:16:16.142 "name": "NewBaseBdev", 00:16:16.142 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:16.142 "is_configured": true, 00:16:16.142 "data_offset": 2048, 00:16:16.142 "data_size": 63488 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "name": "BaseBdev2", 00:16:16.142 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:16.142 "is_configured": true, 00:16:16.142 "data_offset": 2048, 00:16:16.142 "data_size": 63488 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "name": "BaseBdev3", 00:16:16.142 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:16.142 "is_configured": true, 00:16:16.142 "data_offset": 2048, 00:16:16.142 "data_size": 63488 00:16:16.142 }, 00:16:16.142 { 00:16:16.142 "name": "BaseBdev4", 00:16:16.142 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:16.142 "is_configured": true, 00:16:16.142 "data_offset": 2048, 00:16:16.142 "data_size": 63488 00:16:16.142 } 00:16:16.142 ] 00:16:16.142 } 00:16:16.142 } 00:16:16.142 }' 00:16:16.142 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.142 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:16.142 BaseBdev2 00:16:16.142 BaseBdev3 00:16:16.142 BaseBdev4' 00:16:16.142 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:16.142 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 
00:16:16.142 06:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.401 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.401 "name": "NewBaseBdev", 00:16:16.401 "aliases": [ 00:16:16.401 "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf" 00:16:16.401 ], 00:16:16.401 "product_name": "Malloc disk", 00:16:16.401 "block_size": 512, 00:16:16.401 "num_blocks": 65536, 00:16:16.401 "uuid": "7d15df63-7bfd-4d3c-aa90-cc9f06c6e9cf", 00:16:16.401 "assigned_rate_limits": { 00:16:16.401 "rw_ios_per_sec": 0, 00:16:16.401 "rw_mbytes_per_sec": 0, 00:16:16.401 "r_mbytes_per_sec": 0, 00:16:16.401 "w_mbytes_per_sec": 0 00:16:16.401 }, 00:16:16.401 "claimed": true, 00:16:16.401 "claim_type": "exclusive_write", 00:16:16.401 "zoned": false, 00:16:16.401 "supported_io_types": { 00:16:16.401 "read": true, 00:16:16.401 "write": true, 00:16:16.401 "unmap": true, 00:16:16.401 "flush": true, 00:16:16.401 "reset": true, 00:16:16.401 "nvme_admin": false, 00:16:16.401 "nvme_io": false, 00:16:16.401 "nvme_io_md": false, 00:16:16.401 "write_zeroes": true, 00:16:16.401 "zcopy": true, 00:16:16.401 "get_zone_info": false, 00:16:16.401 "zone_management": false, 00:16:16.401 "zone_append": false, 00:16:16.401 "compare": false, 00:16:16.401 "compare_and_write": false, 00:16:16.401 "abort": true, 00:16:16.401 "seek_hole": false, 00:16:16.401 "seek_data": false, 00:16:16.401 "copy": true, 00:16:16.401 "nvme_iov_md": false 00:16:16.401 }, 00:16:16.401 "memory_domains": [ 00:16:16.401 { 00:16:16.401 "dma_device_id": "system", 00:16:16.401 "dma_device_type": 1 00:16:16.401 }, 00:16:16.401 { 00:16:16.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.401 "dma_device_type": 2 00:16:16.401 } 00:16:16.401 ], 00:16:16.401 "driver_specific": {} 00:16:16.401 }' 00:16:16.401 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.401 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.401 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.402 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.402 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:16.661 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.921 06:11:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.921 "name": "BaseBdev2", 00:16:16.921 "aliases": [ 00:16:16.921 "2e5e10d7-6605-4814-91e8-ccaec1a2f364" 00:16:16.921 ], 00:16:16.921 "product_name": "Malloc disk", 00:16:16.921 "block_size": 512, 00:16:16.921 "num_blocks": 65536, 00:16:16.921 "uuid": "2e5e10d7-6605-4814-91e8-ccaec1a2f364", 00:16:16.921 "assigned_rate_limits": { 00:16:16.921 "rw_ios_per_sec": 0, 00:16:16.921 "rw_mbytes_per_sec": 0, 00:16:16.921 "r_mbytes_per_sec": 0, 00:16:16.921 "w_mbytes_per_sec": 0 00:16:16.921 }, 00:16:16.921 "claimed": true, 00:16:16.921 "claim_type": "exclusive_write", 00:16:16.921 "zoned": false, 00:16:16.921 "supported_io_types": { 00:16:16.921 "read": true, 00:16:16.921 "write": true, 00:16:16.921 "unmap": true, 00:16:16.921 "flush": true, 00:16:16.921 "reset": true, 00:16:16.921 "nvme_admin": false, 00:16:16.921 "nvme_io": false, 00:16:16.921 "nvme_io_md": false, 00:16:16.921 "write_zeroes": true, 00:16:16.921 "zcopy": true, 00:16:16.921 "get_zone_info": false, 00:16:16.921 "zone_management": false, 00:16:16.921 "zone_append": false, 00:16:16.921 "compare": false, 00:16:16.921 "compare_and_write": false, 00:16:16.921 "abort": true, 00:16:16.921 "seek_hole": false, 00:16:16.921 "seek_data": false, 00:16:16.921 "copy": true, 00:16:16.921 "nvme_iov_md": false 00:16:16.921 }, 00:16:16.921 "memory_domains": [ 00:16:16.921 { 00:16:16.921 "dma_device_id": "system", 00:16:16.921 "dma_device_type": 1 00:16:16.921 }, 00:16:16.921 { 00:16:16.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.921 "dma_device_type": 2 00:16:16.921 } 00:16:16.921 ], 00:16:16.921 "driver_specific": {} 00:16:16.921 }' 00:16:16.921 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.921 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.921 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.921 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.180 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:17.181 06:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:17.440 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:17.440 "name": "BaseBdev3", 00:16:17.440 "aliases": [ 
00:16:17.440 "bc840d97-0299-4ef1-ac03-ead01f12f81c" 00:16:17.440 ], 00:16:17.440 "product_name": "Malloc disk", 00:16:17.440 "block_size": 512, 00:16:17.440 "num_blocks": 65536, 00:16:17.440 "uuid": "bc840d97-0299-4ef1-ac03-ead01f12f81c", 00:16:17.440 "assigned_rate_limits": { 00:16:17.440 "rw_ios_per_sec": 0, 00:16:17.440 "rw_mbytes_per_sec": 0, 00:16:17.440 "r_mbytes_per_sec": 0, 00:16:17.440 "w_mbytes_per_sec": 0 00:16:17.440 }, 00:16:17.440 "claimed": true, 00:16:17.440 "claim_type": "exclusive_write", 00:16:17.440 "zoned": false, 00:16:17.440 "supported_io_types": { 00:16:17.440 "read": true, 00:16:17.440 "write": true, 00:16:17.440 "unmap": true, 00:16:17.440 "flush": true, 00:16:17.440 "reset": true, 00:16:17.440 "nvme_admin": false, 00:16:17.440 "nvme_io": false, 00:16:17.440 "nvme_io_md": false, 00:16:17.440 "write_zeroes": true, 00:16:17.440 "zcopy": true, 00:16:17.440 "get_zone_info": false, 00:16:17.440 "zone_management": false, 00:16:17.440 "zone_append": false, 00:16:17.440 "compare": false, 00:16:17.440 "compare_and_write": false, 00:16:17.440 "abort": true, 00:16:17.440 "seek_hole": false, 00:16:17.440 "seek_data": false, 00:16:17.440 "copy": true, 00:16:17.440 "nvme_iov_md": false 00:16:17.440 }, 00:16:17.440 "memory_domains": [ 00:16:17.440 { 00:16:17.440 "dma_device_id": "system", 00:16:17.440 "dma_device_type": 1 00:16:17.440 }, 00:16:17.440 { 00:16:17.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.440 "dma_device_type": 2 00:16:17.440 } 00:16:17.440 ], 00:16:17.440 "driver_specific": {} 00:16:17.440 }' 00:16:17.440 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.440 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.440 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:17.699 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:17.958 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:17.958 "name": "BaseBdev4", 00:16:17.958 "aliases": [ 00:16:17.958 "f026fd3d-aed7-4761-a192-4a72db305392" 00:16:17.958 ], 00:16:17.958 "product_name": "Malloc disk", 00:16:17.958 "block_size": 512, 
00:16:17.958 "num_blocks": 65536, 00:16:17.958 "uuid": "f026fd3d-aed7-4761-a192-4a72db305392", 00:16:17.958 "assigned_rate_limits": { 00:16:17.958 "rw_ios_per_sec": 0, 00:16:17.958 "rw_mbytes_per_sec": 0, 00:16:17.958 "r_mbytes_per_sec": 0, 00:16:17.958 "w_mbytes_per_sec": 0 00:16:17.958 }, 00:16:17.958 "claimed": true, 00:16:17.958 "claim_type": "exclusive_write", 00:16:17.958 "zoned": false, 00:16:17.958 "supported_io_types": { 00:16:17.958 "read": true, 00:16:17.958 "write": true, 00:16:17.958 "unmap": true, 00:16:17.958 "flush": true, 00:16:17.958 "reset": true, 00:16:17.958 "nvme_admin": false, 00:16:17.958 "nvme_io": false, 00:16:17.958 "nvme_io_md": false, 00:16:17.958 "write_zeroes": true, 00:16:17.958 "zcopy": true, 00:16:17.958 "get_zone_info": false, 00:16:17.958 "zone_management": false, 00:16:17.958 "zone_append": false, 00:16:17.958 "compare": false, 00:16:17.958 "compare_and_write": false, 00:16:17.958 "abort": true, 00:16:17.958 "seek_hole": false, 00:16:17.958 "seek_data": false, 00:16:17.958 "copy": true, 00:16:17.958 "nvme_iov_md": false 00:16:17.958 }, 00:16:17.958 "memory_domains": [ 00:16:17.958 { 00:16:17.958 "dma_device_id": "system", 00:16:17.958 "dma_device_type": 1 00:16:17.958 }, 00:16:17.958 { 00:16:17.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.958 "dma_device_type": 2 00:16:17.958 } 00:16:17.958 ], 00:16:17.958 "driver_specific": {} 00:16:17.958 }' 00:16:17.958 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.958 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.958 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:17.958 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.216 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.216 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.216 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.216 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.216 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.216 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.216 06:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:18.475 [2024-08-13 06:11:20.193209] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.475 [2024-08-13 06:11:20.193240] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.475 [2024-08-13 06:11:20.193314] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.475 [2024-08-13 06:11:20.193548] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.475 [2024-08-13 06:11:20.193557] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 
00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 89924 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 89924 ']' 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 89924 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89924 00:16:18.475 killing process with pid 89924 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89924' 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 89924 00:16:18.475 [2024-08-13 06:11:20.254670] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.475 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 89924 00:16:18.734 [2024-08-13 06:11:20.294817] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.994 06:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:18.994 00:16:18.994 real 0m27.825s 00:16:18.994 user 0m51.399s 00:16:18.994 sys 0m4.626s 00:16:18.994 ************************************ 00:16:18.994 END TEST raid_state_function_test_sb 00:16:18.994 ************************************ 00:16:18.994 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:18.994 06:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.994 06:11:20 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:18.994 06:11:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:18.994 06:11:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:18.994 06:11:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.994 ************************************ 00:16:18.994 START TEST raid_superblock_test 00:16:18.994 ************************************ 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # 
local base_bdevs_pt_uuid 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=90927 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 90927 /var/tmp/spdk-raid.sock 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 90927 ']' 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:18.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.994 06:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.994 [2024-08-13 06:11:20.709813] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
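The bdev_svc test application restarting above serves raid_superblock_test, which builds its fixtures in three layers before exercising superblock handling: one Malloc bdev per slot, a passthru bdev claiming each Malloc, and a RAID1 bdev assembled from the four passthru devices with on-disk superblocks enabled. The trace that follows records these calls one bdev at a time; a condensed sketch of the same setup, assuming the rpc.py path, socket, and fixed per-slot UUIDs shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
      # 32 MiB backing store with 512-byte blocks -> the 65536-block Malloc disks seen in the trace
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
      # passthru layer claims the malloc bdev and carries a fixed per-slot UUID
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -s writes a superblock onto every base bdev; strip size is unused for raid1
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s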
00:16:18.994 [2024-08-13 06:11:20.709965] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90927 ] 00:16:19.254 [2024-08-13 06:11:20.855185] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.254 [2024-08-13 06:11:20.901609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.254 [2024-08-13 06:11:20.944381] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.254 [2024-08-13 06:11:20.944414] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.821 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:20.080 malloc1 00:16:20.080 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.339 [2024-08-13 06:11:21.912511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.339 [2024-08-13 06:11:21.912604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.339 [2024-08-13 06:11:21.912647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:20.339 [2024-08-13 06:11:21.912674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.339 [2024-08-13 06:11:21.914712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.339 [2024-08-13 06:11:21.914782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.339 pt1 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.339 06:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:20.339 malloc2 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.598 [2024-08-13 06:11:22.320367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.598 [2024-08-13 06:11:22.320416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.598 [2024-08-13 06:11:22.320433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.598 [2024-08-13 06:11:22.320441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.598 [2024-08-13 06:11:22.322363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.598 [2024-08-13 06:11:22.322399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.598 pt2 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.598 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:20.857 malloc3 00:16:20.857 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:21.116 [2024-08-13 06:11:22.761526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:21.116 [2024-08-13 06:11:22.761627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.116 [2024-08-13 06:11:22.761673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:21.116 [2024-08-13 06:11:22.761712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.116 [2024-08-13 06:11:22.763666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.116 [2024-08-13 
06:11:22.763733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:21.116 pt3 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:21.117 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:21.376 malloc4 00:16:21.376 06:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:21.636 [2024-08-13 06:11:23.169268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:21.636 [2024-08-13 06:11:23.169352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.636 [2024-08-13 06:11:23.169385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:21.636 [2024-08-13 06:11:23.169409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.636 [2024-08-13 06:11:23.171501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.636 [2024-08-13 06:11:23.171573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:21.636 pt4 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:21.636 [2024-08-13 06:11:23.372928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.636 [2024-08-13 06:11:23.374666] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.636 [2024-08-13 06:11:23.374771] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.636 [2024-08-13 06:11:23.374828] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:21.636 [2024-08-13 06:11:23.375015] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:21.636 [2024-08-13 06:11:23.375068] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.636 [2024-08-13 06:11:23.375330] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:21.636 [2024-08-13 06:11:23.375507] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000001200 00:16:21.636 [2024-08-13 06:11:23.375546] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:21.636 [2024-08-13 06:11:23.375700] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.636 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.896 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.896 "name": "raid_bdev1", 00:16:21.896 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:21.896 "strip_size_kb": 0, 00:16:21.896 "state": "online", 00:16:21.896 "raid_level": "raid1", 00:16:21.896 "superblock": true, 00:16:21.896 "num_base_bdevs": 4, 00:16:21.896 "num_base_bdevs_discovered": 4, 00:16:21.896 "num_base_bdevs_operational": 4, 00:16:21.896 "base_bdevs_list": [ 00:16:21.896 { 00:16:21.896 "name": "pt1", 00:16:21.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.896 "is_configured": true, 00:16:21.896 "data_offset": 2048, 00:16:21.896 "data_size": 63488 00:16:21.896 }, 00:16:21.896 { 00:16:21.896 "name": "pt2", 00:16:21.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.896 "is_configured": true, 00:16:21.896 "data_offset": 2048, 00:16:21.896 "data_size": 63488 00:16:21.896 }, 00:16:21.896 { 00:16:21.896 "name": "pt3", 00:16:21.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.896 "is_configured": true, 00:16:21.896 "data_offset": 2048, 00:16:21.896 "data_size": 63488 00:16:21.896 }, 00:16:21.896 { 00:16:21.896 "name": "pt4", 00:16:21.896 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.896 "is_configured": true, 00:16:21.896 "data_offset": 2048, 00:16:21.896 "data_size": 63488 00:16:21.896 } 00:16:21.896 ] 00:16:21.896 }' 00:16:21.896 06:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.896 06:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:22.463 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:22.723 [2024-08-13 06:11:24.343517] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.723 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:22.723 "name": "raid_bdev1", 00:16:22.723 "aliases": [ 00:16:22.723 "f5f342f6-3617-43c1-8f8a-f69e8cc2d547" 00:16:22.723 ], 00:16:22.723 "product_name": "Raid Volume", 00:16:22.723 "block_size": 512, 00:16:22.723 "num_blocks": 63488, 00:16:22.723 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:22.723 "assigned_rate_limits": { 00:16:22.723 "rw_ios_per_sec": 0, 00:16:22.723 "rw_mbytes_per_sec": 0, 00:16:22.723 "r_mbytes_per_sec": 0, 00:16:22.723 "w_mbytes_per_sec": 0 00:16:22.723 }, 00:16:22.723 "claimed": false, 00:16:22.723 "zoned": false, 00:16:22.723 "supported_io_types": { 00:16:22.723 "read": true, 00:16:22.723 "write": true, 00:16:22.723 "unmap": false, 00:16:22.723 "flush": false, 00:16:22.723 "reset": true, 00:16:22.723 "nvme_admin": false, 00:16:22.723 "nvme_io": false, 00:16:22.723 "nvme_io_md": false, 00:16:22.723 "write_zeroes": true, 00:16:22.723 "zcopy": false, 00:16:22.723 "get_zone_info": false, 00:16:22.723 "zone_management": false, 00:16:22.723 "zone_append": false, 00:16:22.723 "compare": false, 00:16:22.723 "compare_and_write": false, 00:16:22.723 "abort": false, 00:16:22.723 "seek_hole": false, 00:16:22.723 "seek_data": false, 00:16:22.723 "copy": false, 00:16:22.723 "nvme_iov_md": false 00:16:22.723 }, 00:16:22.723 "memory_domains": [ 00:16:22.723 { 00:16:22.723 "dma_device_id": "system", 00:16:22.723 "dma_device_type": 1 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.723 "dma_device_type": 2 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "dma_device_id": "system", 00:16:22.723 "dma_device_type": 1 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.723 "dma_device_type": 2 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "dma_device_id": "system", 00:16:22.723 "dma_device_type": 1 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.723 "dma_device_type": 2 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "dma_device_id": "system", 00:16:22.723 "dma_device_type": 1 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.723 "dma_device_type": 2 00:16:22.723 } 00:16:22.723 ], 00:16:22.723 "driver_specific": { 00:16:22.723 "raid": { 00:16:22.723 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:22.723 "strip_size_kb": 0, 00:16:22.723 "state": "online", 00:16:22.723 "raid_level": "raid1", 00:16:22.723 "superblock": true, 00:16:22.723 "num_base_bdevs": 4, 00:16:22.723 "num_base_bdevs_discovered": 4, 00:16:22.723 "num_base_bdevs_operational": 4, 00:16:22.723 "base_bdevs_list": [ 
00:16:22.723 { 00:16:22.723 "name": "pt1", 00:16:22.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.723 "is_configured": true, 00:16:22.723 "data_offset": 2048, 00:16:22.723 "data_size": 63488 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "name": "pt2", 00:16:22.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.723 "is_configured": true, 00:16:22.723 "data_offset": 2048, 00:16:22.723 "data_size": 63488 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "name": "pt3", 00:16:22.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.723 "is_configured": true, 00:16:22.723 "data_offset": 2048, 00:16:22.723 "data_size": 63488 00:16:22.723 }, 00:16:22.723 { 00:16:22.723 "name": "pt4", 00:16:22.723 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.723 "is_configured": true, 00:16:22.723 "data_offset": 2048, 00:16:22.723 "data_size": 63488 00:16:22.723 } 00:16:22.723 ] 00:16:22.723 } 00:16:22.723 } 00:16:22.723 }' 00:16:22.723 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.723 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:22.723 pt2 00:16:22.723 pt3 00:16:22.723 pt4' 00:16:22.723 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:22.723 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:22.723 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.983 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.983 "name": "pt1", 00:16:22.983 "aliases": [ 00:16:22.983 "00000000-0000-0000-0000-000000000001" 00:16:22.983 ], 00:16:22.984 "product_name": "passthru", 00:16:22.984 "block_size": 512, 00:16:22.984 "num_blocks": 65536, 00:16:22.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.984 "assigned_rate_limits": { 00:16:22.984 "rw_ios_per_sec": 0, 00:16:22.984 "rw_mbytes_per_sec": 0, 00:16:22.984 "r_mbytes_per_sec": 0, 00:16:22.984 "w_mbytes_per_sec": 0 00:16:22.984 }, 00:16:22.984 "claimed": true, 00:16:22.984 "claim_type": "exclusive_write", 00:16:22.984 "zoned": false, 00:16:22.984 "supported_io_types": { 00:16:22.984 "read": true, 00:16:22.984 "write": true, 00:16:22.984 "unmap": true, 00:16:22.984 "flush": true, 00:16:22.984 "reset": true, 00:16:22.984 "nvme_admin": false, 00:16:22.984 "nvme_io": false, 00:16:22.984 "nvme_io_md": false, 00:16:22.984 "write_zeroes": true, 00:16:22.984 "zcopy": true, 00:16:22.984 "get_zone_info": false, 00:16:22.984 "zone_management": false, 00:16:22.984 "zone_append": false, 00:16:22.984 "compare": false, 00:16:22.984 "compare_and_write": false, 00:16:22.984 "abort": true, 00:16:22.984 "seek_hole": false, 00:16:22.984 "seek_data": false, 00:16:22.984 "copy": true, 00:16:22.984 "nvme_iov_md": false 00:16:22.984 }, 00:16:22.984 "memory_domains": [ 00:16:22.984 { 00:16:22.984 "dma_device_id": "system", 00:16:22.984 "dma_device_type": 1 00:16:22.984 }, 00:16:22.984 { 00:16:22.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.984 "dma_device_type": 2 00:16:22.984 } 00:16:22.984 ], 00:16:22.984 "driver_specific": { 00:16:22.984 "passthru": { 00:16:22.984 "name": "pt1", 00:16:22.984 "base_bdev_name": "malloc1" 00:16:22.984 } 00:16:22.984 } 00:16:22.984 }' 00:16:22.984 06:11:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.984 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.984 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.984 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.984 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.984 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.984 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.244 06:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:23.503 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.503 "name": "pt2", 00:16:23.503 "aliases": [ 00:16:23.503 "00000000-0000-0000-0000-000000000002" 00:16:23.503 ], 00:16:23.503 "product_name": "passthru", 00:16:23.503 "block_size": 512, 00:16:23.503 "num_blocks": 65536, 00:16:23.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.503 "assigned_rate_limits": { 00:16:23.503 "rw_ios_per_sec": 0, 00:16:23.503 "rw_mbytes_per_sec": 0, 00:16:23.503 "r_mbytes_per_sec": 0, 00:16:23.503 "w_mbytes_per_sec": 0 00:16:23.503 }, 00:16:23.503 "claimed": true, 00:16:23.503 "claim_type": "exclusive_write", 00:16:23.503 "zoned": false, 00:16:23.503 "supported_io_types": { 00:16:23.503 "read": true, 00:16:23.503 "write": true, 00:16:23.503 "unmap": true, 00:16:23.503 "flush": true, 00:16:23.503 "reset": true, 00:16:23.503 "nvme_admin": false, 00:16:23.503 "nvme_io": false, 00:16:23.503 "nvme_io_md": false, 00:16:23.503 "write_zeroes": true, 00:16:23.503 "zcopy": true, 00:16:23.503 "get_zone_info": false, 00:16:23.503 "zone_management": false, 00:16:23.503 "zone_append": false, 00:16:23.503 "compare": false, 00:16:23.503 "compare_and_write": false, 00:16:23.503 "abort": true, 00:16:23.503 "seek_hole": false, 00:16:23.503 "seek_data": false, 00:16:23.503 "copy": true, 00:16:23.503 "nvme_iov_md": false 00:16:23.503 }, 00:16:23.503 "memory_domains": [ 00:16:23.503 { 00:16:23.503 "dma_device_id": "system", 00:16:23.503 "dma_device_type": 1 00:16:23.503 }, 00:16:23.503 { 00:16:23.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.503 "dma_device_type": 2 00:16:23.503 } 00:16:23.503 ], 00:16:23.503 "driver_specific": { 00:16:23.503 "passthru": { 00:16:23.503 "name": "pt2", 00:16:23.504 "base_bdev_name": "malloc2" 00:16:23.504 } 00:16:23.504 } 00:16:23.504 }' 00:16:23.504 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.504 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.504 
06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.504 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.504 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.504 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.504 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.504 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.763 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.763 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.763 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.763 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.763 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.763 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:23.763 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.023 "name": "pt3", 00:16:24.023 "aliases": [ 00:16:24.023 "00000000-0000-0000-0000-000000000003" 00:16:24.023 ], 00:16:24.023 "product_name": "passthru", 00:16:24.023 "block_size": 512, 00:16:24.023 "num_blocks": 65536, 00:16:24.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.023 "assigned_rate_limits": { 00:16:24.023 "rw_ios_per_sec": 0, 00:16:24.023 "rw_mbytes_per_sec": 0, 00:16:24.023 "r_mbytes_per_sec": 0, 00:16:24.023 "w_mbytes_per_sec": 0 00:16:24.023 }, 00:16:24.023 "claimed": true, 00:16:24.023 "claim_type": "exclusive_write", 00:16:24.023 "zoned": false, 00:16:24.023 "supported_io_types": { 00:16:24.023 "read": true, 00:16:24.023 "write": true, 00:16:24.023 "unmap": true, 00:16:24.023 "flush": true, 00:16:24.023 "reset": true, 00:16:24.023 "nvme_admin": false, 00:16:24.023 "nvme_io": false, 00:16:24.023 "nvme_io_md": false, 00:16:24.023 "write_zeroes": true, 00:16:24.023 "zcopy": true, 00:16:24.023 "get_zone_info": false, 00:16:24.023 "zone_management": false, 00:16:24.023 "zone_append": false, 00:16:24.023 "compare": false, 00:16:24.023 "compare_and_write": false, 00:16:24.023 "abort": true, 00:16:24.023 "seek_hole": false, 00:16:24.023 "seek_data": false, 00:16:24.023 "copy": true, 00:16:24.023 "nvme_iov_md": false 00:16:24.023 }, 00:16:24.023 "memory_domains": [ 00:16:24.023 { 00:16:24.023 "dma_device_id": "system", 00:16:24.023 "dma_device_type": 1 00:16:24.023 }, 00:16:24.023 { 00:16:24.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.023 "dma_device_type": 2 00:16:24.023 } 00:16:24.023 ], 00:16:24.023 "driver_specific": { 00:16:24.023 "passthru": { 00:16:24.023 "name": "pt3", 00:16:24.023 "base_bdev_name": "malloc3" 00:16:24.023 } 00:16:24.023 } 00:16:24.023 }' 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.023 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:24.283 06:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.543 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.543 "name": "pt4", 00:16:24.543 "aliases": [ 00:16:24.543 "00000000-0000-0000-0000-000000000004" 00:16:24.543 ], 00:16:24.543 "product_name": "passthru", 00:16:24.543 "block_size": 512, 00:16:24.543 "num_blocks": 65536, 00:16:24.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.543 "assigned_rate_limits": { 00:16:24.543 "rw_ios_per_sec": 0, 00:16:24.543 "rw_mbytes_per_sec": 0, 00:16:24.543 "r_mbytes_per_sec": 0, 00:16:24.543 "w_mbytes_per_sec": 0 00:16:24.543 }, 00:16:24.543 "claimed": true, 00:16:24.543 "claim_type": "exclusive_write", 00:16:24.543 "zoned": false, 00:16:24.543 "supported_io_types": { 00:16:24.543 "read": true, 00:16:24.543 "write": true, 00:16:24.543 "unmap": true, 00:16:24.543 "flush": true, 00:16:24.543 "reset": true, 00:16:24.543 "nvme_admin": false, 00:16:24.543 "nvme_io": false, 00:16:24.543 "nvme_io_md": false, 00:16:24.543 "write_zeroes": true, 00:16:24.543 "zcopy": true, 00:16:24.543 "get_zone_info": false, 00:16:24.543 "zone_management": false, 00:16:24.543 "zone_append": false, 00:16:24.543 "compare": false, 00:16:24.543 "compare_and_write": false, 00:16:24.543 "abort": true, 00:16:24.543 "seek_hole": false, 00:16:24.543 "seek_data": false, 00:16:24.543 "copy": true, 00:16:24.543 "nvme_iov_md": false 00:16:24.543 }, 00:16:24.543 "memory_domains": [ 00:16:24.543 { 00:16:24.543 "dma_device_id": "system", 00:16:24.543 "dma_device_type": 1 00:16:24.543 }, 00:16:24.543 { 00:16:24.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.543 "dma_device_type": 2 00:16:24.543 } 00:16:24.543 ], 00:16:24.543 "driver_specific": { 00:16:24.543 "passthru": { 00:16:24.543 "name": "pt4", 00:16:24.543 "base_bdev_name": "malloc4" 00:16:24.543 } 00:16:24.543 } 00:16:24.543 }' 00:16:24.543 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.543 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.543 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.543 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.543 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.803 
06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:24.803 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:25.062 [2024-08-13 06:11:26.719484] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.062 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=f5f342f6-3617-43c1-8f8a-f69e8cc2d547 00:16:25.062 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z f5f342f6-3617-43c1-8f8a-f69e8cc2d547 ']' 00:16:25.062 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:25.322 [2024-08-13 06:11:26.910912] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.322 [2024-08-13 06:11:26.910981] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.322 [2024-08-13 06:11:26.911064] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.322 [2024-08-13 06:11:26.911149] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.322 [2024-08-13 06:11:26.911159] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:25.322 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:25.322 06:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.582 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:25.582 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:25.582 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.582 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:25.582 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.582 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:25.842 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.842 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete 
pt3 00:16:26.116 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:26.116 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:26.398 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:26.398 06:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:26.398 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:26.658 [2024-08-13 06:11:28.340378] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:26.658 [2024-08-13 06:11:28.342220] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:26.658 [2024-08-13 06:11:28.342299] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:26.658 [2024-08-13 06:11:28.342346] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:26.658 [2024-08-13 06:11:28.342408] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:26.658 [2024-08-13 06:11:28.342488] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:26.658 [2024-08-13 06:11:28.342580] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc3 00:16:26.658 [2024-08-13 06:11:28.342677] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:26.658 [2024-08-13 06:11:28.342718] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.658 [2024-08-13 06:11:28.342743] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:16:26.658 request: 00:16:26.658 { 00:16:26.658 "name": "raid_bdev1", 00:16:26.658 "raid_level": "raid1", 00:16:26.658 "base_bdevs": [ 00:16:26.658 "malloc1", 00:16:26.658 "malloc2", 00:16:26.658 "malloc3", 00:16:26.658 "malloc4" 00:16:26.658 ], 00:16:26.658 "superblock": false, 00:16:26.658 "method": "bdev_raid_create", 00:16:26.658 "req_id": 1 00:16:26.658 } 00:16:26.658 Got JSON-RPC error response 00:16:26.658 response: 00:16:26.658 { 00:16:26.658 "code": -17, 00:16:26.658 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:26.658 } 00:16:26.658 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:16:26.658 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:16:26.658 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:16:26.658 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:16:26.658 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.658 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:26.917 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:26.917 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:26.917 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:27.177 [2024-08-13 06:11:28.747658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.177 [2024-08-13 06:11:28.747740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.177 [2024-08-13 06:11:28.747769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:27.177 [2024-08-13 06:11:28.747796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.177 [2024-08-13 06:11:28.749772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.177 [2024-08-13 06:11:28.749851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.177 [2024-08-13 06:11:28.749918] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:27.177 [2024-08-13 06:11:28.749967] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.177 pt1 00:16:27.177 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:27.177 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.177 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.177 06:11:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.178 "name": "raid_bdev1", 00:16:27.178 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:27.178 "strip_size_kb": 0, 00:16:27.178 "state": "configuring", 00:16:27.178 "raid_level": "raid1", 00:16:27.178 "superblock": true, 00:16:27.178 "num_base_bdevs": 4, 00:16:27.178 "num_base_bdevs_discovered": 1, 00:16:27.178 "num_base_bdevs_operational": 4, 00:16:27.178 "base_bdevs_list": [ 00:16:27.178 { 00:16:27.178 "name": "pt1", 00:16:27.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.178 "is_configured": true, 00:16:27.178 "data_offset": 2048, 00:16:27.178 "data_size": 63488 00:16:27.178 }, 00:16:27.178 { 00:16:27.178 "name": null, 00:16:27.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.178 "is_configured": false, 00:16:27.178 "data_offset": 2048, 00:16:27.178 "data_size": 63488 00:16:27.178 }, 00:16:27.178 { 00:16:27.178 "name": null, 00:16:27.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.178 "is_configured": false, 00:16:27.178 "data_offset": 2048, 00:16:27.178 "data_size": 63488 00:16:27.178 }, 00:16:27.178 { 00:16:27.178 "name": null, 00:16:27.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.178 "is_configured": false, 00:16:27.178 "data_offset": 2048, 00:16:27.178 "data_size": 63488 00:16:27.178 } 00:16:27.178 ] 00:16:27.178 }' 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.178 06:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.747 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:16:27.747 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.006 [2024-08-13 06:11:29.694107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.007 [2024-08-13 06:11:29.694186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.007 [2024-08-13 06:11:29.694216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:28.007 [2024-08-13 06:11:29.694241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.007 [2024-08-13 06:11:29.694549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:28.007 [2024-08-13 06:11:29.694604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.007 [2024-08-13 06:11:29.694677] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.007 [2024-08-13 06:11:29.694724] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.007 pt2 00:16:28.007 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:28.267 [2024-08-13 06:11:29.885857] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.267 06:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.526 06:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.526 "name": "raid_bdev1", 00:16:28.526 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:28.526 "strip_size_kb": 0, 00:16:28.526 "state": "configuring", 00:16:28.526 "raid_level": "raid1", 00:16:28.526 "superblock": true, 00:16:28.526 "num_base_bdevs": 4, 00:16:28.526 "num_base_bdevs_discovered": 1, 00:16:28.526 "num_base_bdevs_operational": 4, 00:16:28.526 "base_bdevs_list": [ 00:16:28.526 { 00:16:28.526 "name": "pt1", 00:16:28.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.526 "is_configured": true, 00:16:28.526 "data_offset": 2048, 00:16:28.526 "data_size": 63488 00:16:28.526 }, 00:16:28.526 { 00:16:28.526 "name": null, 00:16:28.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.526 "is_configured": false, 00:16:28.526 "data_offset": 2048, 00:16:28.526 "data_size": 63488 00:16:28.526 }, 00:16:28.526 { 00:16:28.526 "name": null, 00:16:28.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.526 "is_configured": false, 00:16:28.526 "data_offset": 2048, 00:16:28.526 "data_size": 63488 00:16:28.526 }, 00:16:28.526 { 00:16:28.526 "name": null, 00:16:28.526 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.526 "is_configured": false, 00:16:28.526 "data_offset": 2048, 00:16:28.526 "data_size": 63488 00:16:28.526 } 00:16:28.526 ] 00:16:28.526 }' 00:16:28.526 06:11:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.526 06:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.096 06:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:29.096 06:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:29.096 06:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:29.096 [2024-08-13 06:11:30.824310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:29.096 [2024-08-13 06:11:30.824352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.096 [2024-08-13 06:11:30.824369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:29.096 [2024-08-13 06:11:30.824377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.096 [2024-08-13 06:11:30.824668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.096 [2024-08-13 06:11:30.824684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:29.096 [2024-08-13 06:11:30.824731] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:29.096 [2024-08-13 06:11:30.824745] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.096 pt2 00:16:29.096 06:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:29.096 06:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:29.096 06:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:29.356 [2024-08-13 06:11:31.031931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:29.356 [2024-08-13 06:11:31.031969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.356 [2024-08-13 06:11:31.031991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:29.356 [2024-08-13 06:11:31.031998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.356 [2024-08-13 06:11:31.032298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.356 [2024-08-13 06:11:31.032313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:29.356 [2024-08-13 06:11:31.032362] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:29.356 [2024-08-13 06:11:31.032377] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.356 pt3 00:16:29.356 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:29.356 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:29.356 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:29.615 [2024-08-13 06:11:31.227614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:16:29.616 [2024-08-13 06:11:31.227657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.616 [2024-08-13 06:11:31.227676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:29.616 [2024-08-13 06:11:31.227683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.616 [2024-08-13 06:11:31.227997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.616 [2024-08-13 06:11:31.228012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:29.616 [2024-08-13 06:11:31.228094] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:29.616 [2024-08-13 06:11:31.228111] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:29.616 [2024-08-13 06:11:31.228208] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:29.616 [2024-08-13 06:11:31.228215] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:29.616 [2024-08-13 06:11:31.228417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:29.616 [2024-08-13 06:11:31.228538] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:29.616 [2024-08-13 06:11:31.228560] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:29.616 [2024-08-13 06:11:31.228640] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.616 pt4 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.616 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.875 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.875 "name": "raid_bdev1", 00:16:29.875 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:29.875 "strip_size_kb": 0, 00:16:29.875 "state": "online", 00:16:29.875 "raid_level": 
"raid1", 00:16:29.875 "superblock": true, 00:16:29.875 "num_base_bdevs": 4, 00:16:29.875 "num_base_bdevs_discovered": 4, 00:16:29.875 "num_base_bdevs_operational": 4, 00:16:29.875 "base_bdevs_list": [ 00:16:29.875 { 00:16:29.875 "name": "pt1", 00:16:29.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.875 "is_configured": true, 00:16:29.875 "data_offset": 2048, 00:16:29.875 "data_size": 63488 00:16:29.875 }, 00:16:29.875 { 00:16:29.875 "name": "pt2", 00:16:29.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.875 "is_configured": true, 00:16:29.875 "data_offset": 2048, 00:16:29.875 "data_size": 63488 00:16:29.875 }, 00:16:29.875 { 00:16:29.875 "name": "pt3", 00:16:29.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.875 "is_configured": true, 00:16:29.875 "data_offset": 2048, 00:16:29.875 "data_size": 63488 00:16:29.875 }, 00:16:29.875 { 00:16:29.875 "name": "pt4", 00:16:29.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.875 "is_configured": true, 00:16:29.875 "data_offset": 2048, 00:16:29.875 "data_size": 63488 00:16:29.875 } 00:16:29.875 ] 00:16:29.875 }' 00:16:29.876 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.876 06:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:30.445 06:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:30.445 [2024-08-13 06:11:32.146351] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.445 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:30.445 "name": "raid_bdev1", 00:16:30.445 "aliases": [ 00:16:30.445 "f5f342f6-3617-43c1-8f8a-f69e8cc2d547" 00:16:30.445 ], 00:16:30.445 "product_name": "Raid Volume", 00:16:30.445 "block_size": 512, 00:16:30.445 "num_blocks": 63488, 00:16:30.445 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:30.445 "assigned_rate_limits": { 00:16:30.445 "rw_ios_per_sec": 0, 00:16:30.445 "rw_mbytes_per_sec": 0, 00:16:30.445 "r_mbytes_per_sec": 0, 00:16:30.445 "w_mbytes_per_sec": 0 00:16:30.445 }, 00:16:30.445 "claimed": false, 00:16:30.445 "zoned": false, 00:16:30.445 "supported_io_types": { 00:16:30.445 "read": true, 00:16:30.445 "write": true, 00:16:30.445 "unmap": false, 00:16:30.445 "flush": false, 00:16:30.445 "reset": true, 00:16:30.445 "nvme_admin": false, 00:16:30.445 "nvme_io": false, 00:16:30.445 "nvme_io_md": false, 00:16:30.445 "write_zeroes": true, 00:16:30.445 "zcopy": false, 00:16:30.445 "get_zone_info": false, 00:16:30.445 "zone_management": false, 00:16:30.445 "zone_append": false, 00:16:30.445 "compare": false, 00:16:30.445 "compare_and_write": false, 00:16:30.445 
"abort": false, 00:16:30.445 "seek_hole": false, 00:16:30.445 "seek_data": false, 00:16:30.445 "copy": false, 00:16:30.445 "nvme_iov_md": false 00:16:30.445 }, 00:16:30.445 "memory_domains": [ 00:16:30.445 { 00:16:30.445 "dma_device_id": "system", 00:16:30.445 "dma_device_type": 1 00:16:30.445 }, 00:16:30.445 { 00:16:30.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.445 "dma_device_type": 2 00:16:30.445 }, 00:16:30.445 { 00:16:30.445 "dma_device_id": "system", 00:16:30.445 "dma_device_type": 1 00:16:30.445 }, 00:16:30.445 { 00:16:30.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.445 "dma_device_type": 2 00:16:30.445 }, 00:16:30.445 { 00:16:30.445 "dma_device_id": "system", 00:16:30.445 "dma_device_type": 1 00:16:30.445 }, 00:16:30.445 { 00:16:30.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.445 "dma_device_type": 2 00:16:30.445 }, 00:16:30.445 { 00:16:30.446 "dma_device_id": "system", 00:16:30.446 "dma_device_type": 1 00:16:30.446 }, 00:16:30.446 { 00:16:30.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.446 "dma_device_type": 2 00:16:30.446 } 00:16:30.446 ], 00:16:30.446 "driver_specific": { 00:16:30.446 "raid": { 00:16:30.446 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:30.446 "strip_size_kb": 0, 00:16:30.446 "state": "online", 00:16:30.446 "raid_level": "raid1", 00:16:30.446 "superblock": true, 00:16:30.446 "num_base_bdevs": 4, 00:16:30.446 "num_base_bdevs_discovered": 4, 00:16:30.446 "num_base_bdevs_operational": 4, 00:16:30.446 "base_bdevs_list": [ 00:16:30.446 { 00:16:30.446 "name": "pt1", 00:16:30.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.446 "is_configured": true, 00:16:30.446 "data_offset": 2048, 00:16:30.446 "data_size": 63488 00:16:30.446 }, 00:16:30.446 { 00:16:30.446 "name": "pt2", 00:16:30.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.446 "is_configured": true, 00:16:30.446 "data_offset": 2048, 00:16:30.446 "data_size": 63488 00:16:30.446 }, 00:16:30.446 { 00:16:30.446 "name": "pt3", 00:16:30.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.446 "is_configured": true, 00:16:30.446 "data_offset": 2048, 00:16:30.446 "data_size": 63488 00:16:30.446 }, 00:16:30.446 { 00:16:30.446 "name": "pt4", 00:16:30.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.446 "is_configured": true, 00:16:30.446 "data_offset": 2048, 00:16:30.446 "data_size": 63488 00:16:30.446 } 00:16:30.446 ] 00:16:30.446 } 00:16:30.446 } 00:16:30.446 }' 00:16:30.446 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.446 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:30.446 pt2 00:16:30.446 pt3 00:16:30.446 pt4' 00:16:30.446 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.446 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:30.446 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:30.705 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.705 "name": "pt1", 00:16:30.705 "aliases": [ 00:16:30.705 "00000000-0000-0000-0000-000000000001" 00:16:30.705 ], 00:16:30.705 "product_name": "passthru", 00:16:30.705 "block_size": 512, 00:16:30.705 "num_blocks": 65536, 00:16:30.705 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:30.705 "assigned_rate_limits": { 00:16:30.705 "rw_ios_per_sec": 0, 00:16:30.705 "rw_mbytes_per_sec": 0, 00:16:30.705 "r_mbytes_per_sec": 0, 00:16:30.705 "w_mbytes_per_sec": 0 00:16:30.705 }, 00:16:30.705 "claimed": true, 00:16:30.705 "claim_type": "exclusive_write", 00:16:30.705 "zoned": false, 00:16:30.705 "supported_io_types": { 00:16:30.705 "read": true, 00:16:30.705 "write": true, 00:16:30.705 "unmap": true, 00:16:30.705 "flush": true, 00:16:30.705 "reset": true, 00:16:30.705 "nvme_admin": false, 00:16:30.705 "nvme_io": false, 00:16:30.705 "nvme_io_md": false, 00:16:30.705 "write_zeroes": true, 00:16:30.705 "zcopy": true, 00:16:30.705 "get_zone_info": false, 00:16:30.705 "zone_management": false, 00:16:30.705 "zone_append": false, 00:16:30.705 "compare": false, 00:16:30.705 "compare_and_write": false, 00:16:30.705 "abort": true, 00:16:30.705 "seek_hole": false, 00:16:30.705 "seek_data": false, 00:16:30.705 "copy": true, 00:16:30.705 "nvme_iov_md": false 00:16:30.705 }, 00:16:30.705 "memory_domains": [ 00:16:30.705 { 00:16:30.705 "dma_device_id": "system", 00:16:30.705 "dma_device_type": 1 00:16:30.705 }, 00:16:30.705 { 00:16:30.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.706 "dma_device_type": 2 00:16:30.706 } 00:16:30.706 ], 00:16:30.706 "driver_specific": { 00:16:30.706 "passthru": { 00:16:30.706 "name": "pt1", 00:16:30.706 "base_bdev_name": "malloc1" 00:16:30.706 } 00:16:30.706 } 00:16:30.706 }' 00:16:30.706 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.706 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.706 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.706 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:30.965 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.224 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.225 "name": "pt2", 00:16:31.225 "aliases": [ 00:16:31.225 "00000000-0000-0000-0000-000000000002" 00:16:31.225 ], 00:16:31.225 "product_name": "passthru", 00:16:31.225 "block_size": 512, 00:16:31.225 "num_blocks": 65536, 00:16:31.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.225 "assigned_rate_limits": { 00:16:31.225 "rw_ios_per_sec": 0, 00:16:31.225 "rw_mbytes_per_sec": 0, 
00:16:31.225 "r_mbytes_per_sec": 0, 00:16:31.225 "w_mbytes_per_sec": 0 00:16:31.225 }, 00:16:31.225 "claimed": true, 00:16:31.225 "claim_type": "exclusive_write", 00:16:31.225 "zoned": false, 00:16:31.225 "supported_io_types": { 00:16:31.225 "read": true, 00:16:31.225 "write": true, 00:16:31.225 "unmap": true, 00:16:31.225 "flush": true, 00:16:31.225 "reset": true, 00:16:31.225 "nvme_admin": false, 00:16:31.225 "nvme_io": false, 00:16:31.225 "nvme_io_md": false, 00:16:31.225 "write_zeroes": true, 00:16:31.225 "zcopy": true, 00:16:31.225 "get_zone_info": false, 00:16:31.225 "zone_management": false, 00:16:31.225 "zone_append": false, 00:16:31.225 "compare": false, 00:16:31.225 "compare_and_write": false, 00:16:31.225 "abort": true, 00:16:31.225 "seek_hole": false, 00:16:31.225 "seek_data": false, 00:16:31.225 "copy": true, 00:16:31.225 "nvme_iov_md": false 00:16:31.225 }, 00:16:31.225 "memory_domains": [ 00:16:31.225 { 00:16:31.225 "dma_device_id": "system", 00:16:31.225 "dma_device_type": 1 00:16:31.225 }, 00:16:31.225 { 00:16:31.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.225 "dma_device_type": 2 00:16:31.225 } 00:16:31.225 ], 00:16:31.225 "driver_specific": { 00:16:31.225 "passthru": { 00:16:31.225 "name": "pt2", 00:16:31.225 "base_bdev_name": "malloc2" 00:16:31.225 } 00:16:31.225 } 00:16:31.225 }' 00:16:31.225 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.225 06:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.485 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.744 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.744 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.744 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:31.744 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.744 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.744 "name": "pt3", 00:16:31.744 "aliases": [ 00:16:31.744 "00000000-0000-0000-0000-000000000003" 00:16:31.744 ], 00:16:31.744 "product_name": "passthru", 00:16:31.744 "block_size": 512, 00:16:31.744 "num_blocks": 65536, 00:16:31.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.744 "assigned_rate_limits": { 00:16:31.744 "rw_ios_per_sec": 0, 00:16:31.744 "rw_mbytes_per_sec": 0, 00:16:31.744 "r_mbytes_per_sec": 0, 00:16:31.744 "w_mbytes_per_sec": 0 00:16:31.744 }, 00:16:31.744 "claimed": true, 00:16:31.744 "claim_type": 
"exclusive_write", 00:16:31.744 "zoned": false, 00:16:31.744 "supported_io_types": { 00:16:31.744 "read": true, 00:16:31.744 "write": true, 00:16:31.744 "unmap": true, 00:16:31.744 "flush": true, 00:16:31.744 "reset": true, 00:16:31.744 "nvme_admin": false, 00:16:31.744 "nvme_io": false, 00:16:31.744 "nvme_io_md": false, 00:16:31.744 "write_zeroes": true, 00:16:31.744 "zcopy": true, 00:16:31.744 "get_zone_info": false, 00:16:31.744 "zone_management": false, 00:16:31.744 "zone_append": false, 00:16:31.744 "compare": false, 00:16:31.744 "compare_and_write": false, 00:16:31.744 "abort": true, 00:16:31.744 "seek_hole": false, 00:16:31.744 "seek_data": false, 00:16:31.744 "copy": true, 00:16:31.744 "nvme_iov_md": false 00:16:31.744 }, 00:16:31.744 "memory_domains": [ 00:16:31.744 { 00:16:31.744 "dma_device_id": "system", 00:16:31.744 "dma_device_type": 1 00:16:31.744 }, 00:16:31.744 { 00:16:31.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.744 "dma_device_type": 2 00:16:31.744 } 00:16:31.744 ], 00:16:31.744 "driver_specific": { 00:16:31.744 "passthru": { 00:16:31.744 "name": "pt3", 00:16:31.744 "base_bdev_name": "malloc3" 00:16:31.744 } 00:16:31.744 } 00:16:31.744 }' 00:16:31.745 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.007 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.008 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.268 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.268 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.268 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.268 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:32.268 06:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.268 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.268 "name": "pt4", 00:16:32.268 "aliases": [ 00:16:32.268 "00000000-0000-0000-0000-000000000004" 00:16:32.268 ], 00:16:32.268 "product_name": "passthru", 00:16:32.268 "block_size": 512, 00:16:32.268 "num_blocks": 65536, 00:16:32.268 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.268 "assigned_rate_limits": { 00:16:32.268 "rw_ios_per_sec": 0, 00:16:32.268 "rw_mbytes_per_sec": 0, 00:16:32.268 "r_mbytes_per_sec": 0, 00:16:32.268 "w_mbytes_per_sec": 0 00:16:32.268 }, 00:16:32.268 "claimed": true, 00:16:32.268 "claim_type": "exclusive_write", 00:16:32.268 "zoned": false, 00:16:32.268 "supported_io_types": { 00:16:32.268 "read": true, 00:16:32.268 "write": true, 00:16:32.268 
"unmap": true, 00:16:32.268 "flush": true, 00:16:32.269 "reset": true, 00:16:32.269 "nvme_admin": false, 00:16:32.269 "nvme_io": false, 00:16:32.269 "nvme_io_md": false, 00:16:32.269 "write_zeroes": true, 00:16:32.269 "zcopy": true, 00:16:32.269 "get_zone_info": false, 00:16:32.269 "zone_management": false, 00:16:32.269 "zone_append": false, 00:16:32.269 "compare": false, 00:16:32.269 "compare_and_write": false, 00:16:32.269 "abort": true, 00:16:32.269 "seek_hole": false, 00:16:32.269 "seek_data": false, 00:16:32.269 "copy": true, 00:16:32.269 "nvme_iov_md": false 00:16:32.269 }, 00:16:32.269 "memory_domains": [ 00:16:32.269 { 00:16:32.269 "dma_device_id": "system", 00:16:32.269 "dma_device_type": 1 00:16:32.269 }, 00:16:32.269 { 00:16:32.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.269 "dma_device_type": 2 00:16:32.269 } 00:16:32.269 ], 00:16:32.269 "driver_specific": { 00:16:32.269 "passthru": { 00:16:32.269 "name": "pt4", 00:16:32.269 "base_bdev_name": "malloc4" 00:16:32.269 } 00:16:32.269 } 00:16:32.269 }' 00:16:32.269 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.528 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.528 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.528 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.528 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.528 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.529 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.529 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.529 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.529 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.788 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.788 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.788 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:32.788 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:32.788 [2024-08-13 06:11:34.546364] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' f5f342f6-3617-43c1-8f8a-f69e8cc2d547 '!=' f5f342f6-3617-43c1-8f8a-f69e8cc2d547 ']' 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:33.048 [2024-08-13 06:11:34.757759] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:33.048 06:11:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.048 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.308 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.308 "name": "raid_bdev1", 00:16:33.308 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:33.308 "strip_size_kb": 0, 00:16:33.308 "state": "online", 00:16:33.308 "raid_level": "raid1", 00:16:33.308 "superblock": true, 00:16:33.308 "num_base_bdevs": 4, 00:16:33.308 "num_base_bdevs_discovered": 3, 00:16:33.308 "num_base_bdevs_operational": 3, 00:16:33.308 "base_bdevs_list": [ 00:16:33.308 { 00:16:33.308 "name": null, 00:16:33.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.308 "is_configured": false, 00:16:33.308 "data_offset": 2048, 00:16:33.308 "data_size": 63488 00:16:33.308 }, 00:16:33.308 { 00:16:33.308 "name": "pt2", 00:16:33.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.308 "is_configured": true, 00:16:33.308 "data_offset": 2048, 00:16:33.308 "data_size": 63488 00:16:33.308 }, 00:16:33.308 { 00:16:33.308 "name": "pt3", 00:16:33.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.308 "is_configured": true, 00:16:33.308 "data_offset": 2048, 00:16:33.308 "data_size": 63488 00:16:33.308 }, 00:16:33.308 { 00:16:33.308 "name": "pt4", 00:16:33.308 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.308 "is_configured": true, 00:16:33.308 "data_offset": 2048, 00:16:33.308 "data_size": 63488 00:16:33.308 } 00:16:33.308 ] 00:16:33.308 }' 00:16:33.308 06:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.308 06:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.878 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:33.878 [2024-08-13 06:11:35.656142] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.878 [2024-08-13 06:11:35.656209] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.878 [2024-08-13 06:11:35.656276] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.878 [2024-08-13 06:11:35.656357] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:33.878 [2024-08-13 06:11:35.656387] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:34.137 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:16:34.137 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.137 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:16:34.137 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:16:34.137 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:16:34.137 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:34.137 06:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:34.397 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:16:34.397 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:34.397 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:16:34.657 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.916 [2024-08-13 06:11:36.566596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.916 [2024-08-13 06:11:36.566676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.916 [2024-08-13 06:11:36.566707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:34.916 [2024-08-13 06:11:36.566730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.916 [2024-08-13 06:11:36.568628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.916 [2024-08-13 06:11:36.568693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.916 [2024-08-13 06:11:36.568766] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:34.917 [2024-08-13 06:11:36.568809] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.917 pt2 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:34.917 06:11:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.917 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.176 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.176 "name": "raid_bdev1", 00:16:35.176 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:35.176 "strip_size_kb": 0, 00:16:35.176 "state": "configuring", 00:16:35.176 "raid_level": "raid1", 00:16:35.176 "superblock": true, 00:16:35.176 "num_base_bdevs": 4, 00:16:35.176 "num_base_bdevs_discovered": 1, 00:16:35.176 "num_base_bdevs_operational": 3, 00:16:35.176 "base_bdevs_list": [ 00:16:35.176 { 00:16:35.176 "name": null, 00:16:35.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.176 "is_configured": false, 00:16:35.176 "data_offset": 2048, 00:16:35.176 "data_size": 63488 00:16:35.176 }, 00:16:35.176 { 00:16:35.176 "name": "pt2", 00:16:35.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.176 "is_configured": true, 00:16:35.176 "data_offset": 2048, 00:16:35.176 "data_size": 63488 00:16:35.176 }, 00:16:35.176 { 00:16:35.176 "name": null, 00:16:35.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.176 "is_configured": false, 00:16:35.176 "data_offset": 2048, 00:16:35.176 "data_size": 63488 00:16:35.176 }, 00:16:35.176 { 00:16:35.176 "name": null, 00:16:35.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:35.176 "is_configured": false, 00:16:35.176 "data_offset": 2048, 00:16:35.176 "data_size": 63488 00:16:35.176 } 00:16:35.176 ] 00:16:35.176 }' 00:16:35.176 06:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.176 06:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.746 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:16:35.746 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.747 [2024-08-13 06:11:37.497182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.747 [2024-08-13 06:11:37.497260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.747 
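For readers following the trace: the loop at bdev_raid.sh@526-527 above re-creates the base bdevs one at a time so that the examine path can re-assemble raid_bdev1 from the superblocks it finds. A minimal by-hand sketch of the same RPC sequence follows, assuming only what the log itself shows (the /home/vagrant/spdk_repo checkout, the /var/tmp/spdk-raid.sock socket, and the test's fixed placeholder UUIDs); this is an illustration of the flow, not part of the captured output:

# re-create passthru bdevs over the malloc bdevs that still carry the raid1 superblock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
# each new pt bdev is claimed on examine; raid_bdev1 stays "configuring" until the last
# base bdev of this pass (pt4, created further down at bdev_raid.sh@535) appears
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all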
[2024-08-13 06:11:37.497289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:35.747 [2024-08-13 06:11:37.497311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.747 [2024-08-13 06:11:37.497639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.747 [2024-08-13 06:11:37.497689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.747 [2024-08-13 06:11:37.497772] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:35.747 [2024-08-13 06:11:37.497841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.747 pt3 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.747 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.007 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.007 "name": "raid_bdev1", 00:16:36.007 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:36.007 "strip_size_kb": 0, 00:16:36.007 "state": "configuring", 00:16:36.007 "raid_level": "raid1", 00:16:36.007 "superblock": true, 00:16:36.007 "num_base_bdevs": 4, 00:16:36.007 "num_base_bdevs_discovered": 2, 00:16:36.007 "num_base_bdevs_operational": 3, 00:16:36.007 "base_bdevs_list": [ 00:16:36.007 { 00:16:36.007 "name": null, 00:16:36.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.007 "is_configured": false, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 }, 00:16:36.007 { 00:16:36.007 "name": "pt2", 00:16:36.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.007 "is_configured": true, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 }, 00:16:36.007 { 00:16:36.007 "name": "pt3", 00:16:36.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.007 "is_configured": true, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 }, 00:16:36.007 { 00:16:36.007 "name": null, 00:16:36.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:36.007 "is_configured": false, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 } 00:16:36.007 ] 
00:16:36.007 }' 00:16:36.007 06:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.007 06:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.576 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:16:36.576 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:16:36.576 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:36.576 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:36.836 [2024-08-13 06:11:38.427560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:36.836 [2024-08-13 06:11:38.427635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.837 [2024-08-13 06:11:38.427653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:36.837 [2024-08-13 06:11:38.427661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.837 [2024-08-13 06:11:38.427959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.837 [2024-08-13 06:11:38.427976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:36.837 [2024-08-13 06:11:38.428027] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:36.837 [2024-08-13 06:11:38.428063] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:36.837 [2024-08-13 06:11:38.428149] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:36.837 [2024-08-13 06:11:38.428158] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:36.837 [2024-08-13 06:11:38.428400] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:36.837 [2024-08-13 06:11:38.428508] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:36.837 [2024-08-13 06:11:38.428531] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:36.837 [2024-08-13 06:11:38.428608] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.837 pt4 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.837 06:11:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.837 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.097 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.097 "name": "raid_bdev1", 00:16:37.097 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:37.097 "strip_size_kb": 0, 00:16:37.097 "state": "online", 00:16:37.097 "raid_level": "raid1", 00:16:37.097 "superblock": true, 00:16:37.097 "num_base_bdevs": 4, 00:16:37.097 "num_base_bdevs_discovered": 3, 00:16:37.097 "num_base_bdevs_operational": 3, 00:16:37.097 "base_bdevs_list": [ 00:16:37.097 { 00:16:37.097 "name": null, 00:16:37.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.097 "is_configured": false, 00:16:37.097 "data_offset": 2048, 00:16:37.097 "data_size": 63488 00:16:37.097 }, 00:16:37.097 { 00:16:37.097 "name": "pt2", 00:16:37.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.097 "is_configured": true, 00:16:37.097 "data_offset": 2048, 00:16:37.097 "data_size": 63488 00:16:37.097 }, 00:16:37.097 { 00:16:37.097 "name": "pt3", 00:16:37.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.097 "is_configured": true, 00:16:37.097 "data_offset": 2048, 00:16:37.097 "data_size": 63488 00:16:37.097 }, 00:16:37.097 { 00:16:37.097 "name": "pt4", 00:16:37.097 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.097 "is_configured": true, 00:16:37.097 "data_offset": 2048, 00:16:37.097 "data_size": 63488 00:16:37.097 } 00:16:37.097 ] 00:16:37.097 }' 00:16:37.097 06:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.097 06:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.666 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:37.666 [2024-08-13 06:11:39.394024] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.666 [2024-08-13 06:11:39.394096] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.666 [2024-08-13 06:11:39.394160] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.666 [2024-08-13 06:11:39.394229] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.666 [2024-08-13 06:11:39.394262] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:37.666 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:16:37.666 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.925 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:16:37.925 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:16:37.925 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:16:37.925 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:16:37.925 06:11:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:38.183 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.442 [2024-08-13 06:11:39.981094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.442 [2024-08-13 06:11:39.981135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.442 [2024-08-13 06:11:39.981148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:38.442 [2024-08-13 06:11:39.981157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.442 [2024-08-13 06:11:39.983163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.442 [2024-08-13 06:11:39.983200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.442 [2024-08-13 06:11:39.983246] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.442 [2024-08-13 06:11:39.983282] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.442 [2024-08-13 06:11:39.983367] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:38.442 [2024-08-13 06:11:39.983380] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.442 [2024-08-13 06:11:39.983391] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:38.442 [2024-08-13 06:11:39.983417] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.442 [2024-08-13 06:11:39.983496] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.442 pt1 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.442 06:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.442 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.442 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:38.442 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.442 "name": "raid_bdev1", 00:16:38.442 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:38.442 "strip_size_kb": 0, 00:16:38.442 "state": "configuring", 00:16:38.442 "raid_level": "raid1", 00:16:38.442 "superblock": true, 00:16:38.442 "num_base_bdevs": 4, 00:16:38.442 "num_base_bdevs_discovered": 2, 00:16:38.442 "num_base_bdevs_operational": 3, 00:16:38.442 "base_bdevs_list": [ 00:16:38.442 { 00:16:38.442 "name": null, 00:16:38.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.442 "is_configured": false, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 }, 00:16:38.442 { 00:16:38.442 "name": "pt2", 00:16:38.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.442 "is_configured": true, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 }, 00:16:38.442 { 00:16:38.442 "name": "pt3", 00:16:38.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.442 "is_configured": true, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 }, 00:16:38.442 { 00:16:38.442 "name": null, 00:16:38.442 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.442 "is_configured": false, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 } 00:16:38.442 ] 00:16:38.442 }' 00:16:38.442 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.442 06:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.010 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:39.010 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:16:39.269 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:16:39.269 06:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:39.529 [2024-08-13 06:11:41.163140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:39.529 [2024-08-13 06:11:41.163217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.529 [2024-08-13 06:11:41.163248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:39.529 [2024-08-13 06:11:41.163271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.529 [2024-08-13 06:11:41.163565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.529 [2024-08-13 06:11:41.163617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:39.529 [2024-08-13 06:11:41.163690] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:39.529 [2024-08-13 06:11:41.163732] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:39.529 [2024-08-13 06:11:41.163851] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:39.529 [2024-08-13 06:11:41.163888] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:39.529 [2024-08-13 06:11:41.164129] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:16:39.529 [2024-08-13 06:11:41.164258] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:39.529 [2024-08-13 06:11:41.164297] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:39.529 [2024-08-13 06:11:41.164418] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.529 pt4 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.529 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.789 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.789 "name": "raid_bdev1", 00:16:39.789 "uuid": "f5f342f6-3617-43c1-8f8a-f69e8cc2d547", 00:16:39.789 "strip_size_kb": 0, 00:16:39.789 "state": "online", 00:16:39.789 "raid_level": "raid1", 00:16:39.789 "superblock": true, 00:16:39.789 "num_base_bdevs": 4, 00:16:39.789 "num_base_bdevs_discovered": 3, 00:16:39.789 "num_base_bdevs_operational": 3, 00:16:39.789 "base_bdevs_list": [ 00:16:39.789 { 00:16:39.789 "name": null, 00:16:39.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.789 "is_configured": false, 00:16:39.789 "data_offset": 2048, 00:16:39.789 "data_size": 63488 00:16:39.789 }, 00:16:39.789 { 00:16:39.789 "name": "pt2", 00:16:39.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.789 "is_configured": true, 00:16:39.789 "data_offset": 2048, 00:16:39.789 "data_size": 63488 00:16:39.789 }, 00:16:39.789 { 00:16:39.789 "name": "pt3", 00:16:39.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.789 "is_configured": true, 00:16:39.789 "data_offset": 2048, 00:16:39.789 "data_size": 63488 00:16:39.789 }, 00:16:39.789 { 00:16:39.789 "name": "pt4", 00:16:39.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.789 "is_configured": true, 00:16:39.789 "data_offset": 2048, 00:16:39.789 "data_size": 63488 00:16:39.789 } 00:16:39.789 ] 00:16:39.789 }' 00:16:39.789 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.789 06:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.357 06:11:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:40.357 06:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:40.357 06:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:16:40.357 06:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:40.357 06:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:16:40.617 [2024-08-13 06:11:42.273575] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' f5f342f6-3617-43c1-8f8a-f69e8cc2d547 '!=' f5f342f6-3617-43c1-8f8a-f69e8cc2d547 ']' 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 90927 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 90927 ']' 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 90927 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90927 00:16:40.617 killing process with pid 90927 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90927' 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 90927 00:16:40.617 [2024-08-13 06:11:42.338931] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.617 [2024-08-13 06:11:42.339022] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.617 [2024-08-13 06:11:42.339103] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.617 [2024-08-13 06:11:42.339118] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:40.617 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 90927 00:16:40.617 [2024-08-13 06:11:42.382139] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.877 06:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:40.877 00:16:40.877 real 0m22.011s 00:16:40.877 user 0m40.359s 00:16:40.877 sys 0m3.718s 00:16:40.877 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:40.877 06:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.877 ************************************ 00:16:40.877 END TEST raid_superblock_test 00:16:40.877 ************************************ 00:16:41.137 06:11:42 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:41.137 06:11:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 
5 -le 1 ']' 00:16:41.137 06:11:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:41.137 06:11:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.137 ************************************ 00:16:41.137 START TEST raid_read_error_test 00:16:41.137 ************************************ 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 read 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.jgRXHjIi7X 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=91721 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 
91721 /var/tmp/spdk-raid.sock 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 91721 ']' 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:41.137 06:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:41.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:41.138 06:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:41.138 06:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:41.138 06:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.138 [2024-08-13 06:11:42.822746] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:16:41.138 [2024-08-13 06:11:42.822926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91721 ] 00:16:41.397 [2024-08-13 06:11:42.971562] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.397 [2024-08-13 06:11:43.018754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.397 [2024-08-13 06:11:43.061250] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.397 [2024-08-13 06:11:43.061285] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.966 06:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:41.966 06:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:16:41.966 06:11:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:41.966 06:11:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:42.225 BaseBdev1_malloc 00:16:42.225 06:11:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:42.225 true 00:16:42.225 06:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:42.484 [2024-08-13 06:11:44.192629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:42.484 [2024-08-13 06:11:44.192682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.484 [2024-08-13 06:11:44.192698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:42.484 [2024-08-13 06:11:44.192716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.484 [2024-08-13 06:11:44.194788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:42.484 [2024-08-13 06:11:44.194831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:42.484 BaseBdev1 00:16:42.484 06:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:42.484 06:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:42.743 BaseBdev2_malloc 00:16:42.743 06:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:43.002 true 00:16:43.002 06:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:43.261 [2024-08-13 06:11:44.796124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:43.261 [2024-08-13 06:11:44.796178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.261 [2024-08-13 06:11:44.796195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:43.261 [2024-08-13 06:11:44.796206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.261 [2024-08-13 06:11:44.798182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.261 [2024-08-13 06:11:44.798221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:43.261 BaseBdev2 00:16:43.261 06:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:43.261 06:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:43.261 BaseBdev3_malloc 00:16:43.261 06:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:43.520 true 00:16:43.520 06:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:43.780 [2024-08-13 06:11:45.362832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:43.780 [2024-08-13 06:11:45.362895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.780 [2024-08-13 06:11:45.362913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:43.780 [2024-08-13 06:11:45.362924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.780 [2024-08-13 06:11:45.364913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.780 [2024-08-13 06:11:45.364955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:43.780 BaseBdev3 00:16:43.780 06:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:43.780 06:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:43.780 
BaseBdev4_malloc 00:16:44.040 06:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:44.040 true 00:16:44.040 06:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:44.299 [2024-08-13 06:11:45.982397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:44.299 [2024-08-13 06:11:45.982443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.299 [2024-08-13 06:11:45.982459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:44.299 [2024-08-13 06:11:45.982472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.299 [2024-08-13 06:11:45.984411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.299 [2024-08-13 06:11:45.984448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:44.299 BaseBdev4 00:16:44.299 06:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:44.559 [2024-08-13 06:11:46.166214] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.559 [2024-08-13 06:11:46.167886] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.559 [2024-08-13 06:11:46.167961] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.559 [2024-08-13 06:11:46.168017] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.559 [2024-08-13 06:11:46.168213] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:16:44.559 [2024-08-13 06:11:46.168233] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:44.559 [2024-08-13 06:11:46.168457] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:44.559 [2024-08-13 06:11:46.168595] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:16:44.559 [2024-08-13 06:11:46.168606] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:16:44.559 [2024-08-13 06:11:46.168726] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.559 06:11:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.559 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.819 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.819 "name": "raid_bdev1", 00:16:44.819 "uuid": "4a6bad28-7304-493f-8dc3-3ab7fa4658e3", 00:16:44.819 "strip_size_kb": 0, 00:16:44.819 "state": "online", 00:16:44.819 "raid_level": "raid1", 00:16:44.819 "superblock": true, 00:16:44.819 "num_base_bdevs": 4, 00:16:44.819 "num_base_bdevs_discovered": 4, 00:16:44.819 "num_base_bdevs_operational": 4, 00:16:44.819 "base_bdevs_list": [ 00:16:44.819 { 00:16:44.819 "name": "BaseBdev1", 00:16:44.819 "uuid": "5f5ba61f-bcb4-5213-ac2a-fde265114956", 00:16:44.819 "is_configured": true, 00:16:44.819 "data_offset": 2048, 00:16:44.819 "data_size": 63488 00:16:44.819 }, 00:16:44.819 { 00:16:44.819 "name": "BaseBdev2", 00:16:44.819 "uuid": "71dcd2f1-9f35-5958-a51d-5ec4f44208f0", 00:16:44.819 "is_configured": true, 00:16:44.819 "data_offset": 2048, 00:16:44.819 "data_size": 63488 00:16:44.819 }, 00:16:44.819 { 00:16:44.819 "name": "BaseBdev3", 00:16:44.819 "uuid": "f3fe0dfc-e9ad-53d4-88de-d9ee7255f895", 00:16:44.819 "is_configured": true, 00:16:44.819 "data_offset": 2048, 00:16:44.819 "data_size": 63488 00:16:44.819 }, 00:16:44.819 { 00:16:44.819 "name": "BaseBdev4", 00:16:44.819 "uuid": "a53fd40b-8d04-550d-82a5-3aff2eef6a5f", 00:16:44.819 "is_configured": true, 00:16:44.819 "data_offset": 2048, 00:16:44.819 "data_size": 63488 00:16:44.819 } 00:16:44.819 ] 00:16:44.819 }' 00:16:44.819 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.819 06:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.389 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:45.389 06:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:45.389 [2024-08-13 06:11:46.973161] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:46.329 06:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:46.329 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:46.329 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:46.329 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:16:46.329 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:16:46.329 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:46.329 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:46.329 06:11:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:46.329 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.330 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.589 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.589 "name": "raid_bdev1", 00:16:46.590 "uuid": "4a6bad28-7304-493f-8dc3-3ab7fa4658e3", 00:16:46.590 "strip_size_kb": 0, 00:16:46.590 "state": "online", 00:16:46.590 "raid_level": "raid1", 00:16:46.590 "superblock": true, 00:16:46.590 "num_base_bdevs": 4, 00:16:46.590 "num_base_bdevs_discovered": 4, 00:16:46.590 "num_base_bdevs_operational": 4, 00:16:46.590 "base_bdevs_list": [ 00:16:46.590 { 00:16:46.590 "name": "BaseBdev1", 00:16:46.590 "uuid": "5f5ba61f-bcb4-5213-ac2a-fde265114956", 00:16:46.590 "is_configured": true, 00:16:46.590 "data_offset": 2048, 00:16:46.590 "data_size": 63488 00:16:46.590 }, 00:16:46.590 { 00:16:46.590 "name": "BaseBdev2", 00:16:46.590 "uuid": "71dcd2f1-9f35-5958-a51d-5ec4f44208f0", 00:16:46.590 "is_configured": true, 00:16:46.590 "data_offset": 2048, 00:16:46.590 "data_size": 63488 00:16:46.590 }, 00:16:46.590 { 00:16:46.590 "name": "BaseBdev3", 00:16:46.590 "uuid": "f3fe0dfc-e9ad-53d4-88de-d9ee7255f895", 00:16:46.590 "is_configured": true, 00:16:46.590 "data_offset": 2048, 00:16:46.590 "data_size": 63488 00:16:46.590 }, 00:16:46.590 { 00:16:46.590 "name": "BaseBdev4", 00:16:46.590 "uuid": "a53fd40b-8d04-550d-82a5-3aff2eef6a5f", 00:16:46.590 "is_configured": true, 00:16:46.590 "data_offset": 2048, 00:16:46.590 "data_size": 63488 00:16:46.590 } 00:16:46.590 ] 00:16:46.590 }' 00:16:46.590 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.590 06:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.160 06:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:47.422 [2024-08-13 06:11:49.012522] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.422 [2024-08-13 06:11:49.012575] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.422 [2024-08-13 06:11:49.014924] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.422 [2024-08-13 06:11:49.014998] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.422 [2024-08-13 06:11:49.015121] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:16:47.422 [2024-08-13 06:11:49.015135] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:16:47.422 0 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 91721 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 91721 ']' 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 91721 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91721 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:47.422 killing process with pid 91721 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91721' 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 91721 00:16:47.422 [2024-08-13 06:11:49.074803] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.422 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 91721 00:16:47.422 [2024-08-13 06:11:49.109273] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.jgRXHjIi7X 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:47.699 00:16:47.699 real 0m6.645s 00:16:47.699 user 0m10.453s 00:16:47.699 sys 0m1.009s 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:47.699 06:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.699 ************************************ 00:16:47.699 END TEST raid_read_error_test 00:16:47.699 ************************************ 00:16:47.699 06:11:49 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:47.699 06:11:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:47.699 06:11:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:47.699 06:11:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.699 ************************************ 00:16:47.699 START TEST raid_write_error_test 00:16:47.699 ************************************ 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 write 
00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.S5Bmvq9xID 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=91900 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 91900 /var/tmp/spdk-raid.sock 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 91900 ']' 00:16:47.699 06:11:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:47.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:47.699 06:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 [2024-08-13 06:11:49.541955] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:16:47.978 [2024-08-13 06:11:49.542114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91900 ] 00:16:47.978 [2024-08-13 06:11:49.688832] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.978 [2024-08-13 06:11:49.734300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.237 [2024-08-13 06:11:49.777823] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.237 [2024-08-13 06:11:49.777859] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.805 06:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:48.805 06:11:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:16:48.805 06:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:48.805 06:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:48.805 BaseBdev1_malloc 00:16:48.805 06:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:49.064 true 00:16:49.064 06:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:49.323 [2024-08-13 06:11:50.894295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:49.323 [2024-08-13 06:11:50.894353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.323 [2024-08-13 06:11:50.894387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:49.323 [2024-08-13 06:11:50.894407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.323 [2024-08-13 06:11:50.896497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.323 [2024-08-13 06:11:50.896545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.323 BaseBdev1 00:16:49.323 06:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:49.323 06:11:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:49.323 BaseBdev2_malloc 00:16:49.582 06:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:49.582 true 00:16:49.582 06:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:49.841 [2024-08-13 06:11:51.453793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:49.841 [2024-08-13 06:11:51.453849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.841 [2024-08-13 06:11:51.453866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:49.841 [2024-08-13 06:11:51.453875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.841 [2024-08-13 06:11:51.455817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.841 [2024-08-13 06:11:51.455855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.841 BaseBdev2 00:16:49.841 06:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:49.841 06:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:50.101 BaseBdev3_malloc 00:16:50.101 06:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:50.101 true 00:16:50.359 06:11:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:50.359 [2024-08-13 06:11:52.092994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:50.359 [2024-08-13 06:11:52.093052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.359 [2024-08-13 06:11:52.093068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:50.359 [2024-08-13 06:11:52.093078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.360 [2024-08-13 06:11:52.095011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.360 [2024-08-13 06:11:52.095060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:50.360 BaseBdev3 00:16:50.360 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:50.360 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:50.619 BaseBdev4_malloc 00:16:50.619 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:50.878 true 00:16:50.878 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:50.878 [2024-08-13 06:11:52.644545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:50.878 [2024-08-13 06:11:52.644591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.878 [2024-08-13 06:11:52.644623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:50.878 [2024-08-13 06:11:52.644636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.878 [2024-08-13 06:11:52.646528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.878 [2024-08-13 06:11:52.646568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:50.878 BaseBdev4 00:16:51.136 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:51.136 [2024-08-13 06:11:52.852232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.136 [2024-08-13 06:11:52.853918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.136 [2024-08-13 06:11:52.853993] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.136 [2024-08-13 06:11:52.854063] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:51.136 [2024-08-13 06:11:52.854258] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:16:51.136 [2024-08-13 06:11:52.854279] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.137 [2024-08-13 06:11:52.854523] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:51.137 [2024-08-13 06:11:52.854674] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:16:51.137 [2024-08-13 06:11:52.854691] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:16:51.137 [2024-08-13 06:11:52.854813] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.137 06:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.395 06:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.395 "name": "raid_bdev1", 00:16:51.395 "uuid": "f61089d0-22db-43cf-ae5c-a9103aca6616", 00:16:51.395 "strip_size_kb": 0, 00:16:51.395 "state": "online", 00:16:51.395 "raid_level": "raid1", 00:16:51.395 "superblock": true, 00:16:51.395 "num_base_bdevs": 4, 00:16:51.395 "num_base_bdevs_discovered": 4, 00:16:51.395 "num_base_bdevs_operational": 4, 00:16:51.395 "base_bdevs_list": [ 00:16:51.395 { 00:16:51.395 "name": "BaseBdev1", 00:16:51.395 "uuid": "c0c0defe-c86e-5e98-a3a9-0f7b0d20f139", 00:16:51.395 "is_configured": true, 00:16:51.395 "data_offset": 2048, 00:16:51.395 "data_size": 63488 00:16:51.395 }, 00:16:51.395 { 00:16:51.395 "name": "BaseBdev2", 00:16:51.395 "uuid": "c3630486-9893-5dd7-86d9-330cffcf4099", 00:16:51.395 "is_configured": true, 00:16:51.395 "data_offset": 2048, 00:16:51.395 "data_size": 63488 00:16:51.395 }, 00:16:51.395 { 00:16:51.395 "name": "BaseBdev3", 00:16:51.395 "uuid": "3cdb8261-a7b5-5fde-bbc5-645acfe020b3", 00:16:51.395 "is_configured": true, 00:16:51.395 "data_offset": 2048, 00:16:51.395 "data_size": 63488 00:16:51.395 }, 00:16:51.395 { 00:16:51.395 "name": "BaseBdev4", 00:16:51.395 "uuid": "357a8a3d-4f32-5c56-9d9c-676f3120f8c3", 00:16:51.395 "is_configured": true, 00:16:51.395 "data_offset": 2048, 00:16:51.395 "data_size": 63488 00:16:51.395 } 00:16:51.395 ] 00:16:51.395 }' 00:16:51.395 06:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.395 06:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.964 06:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:51.964 06:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:51.964 [2024-08-13 06:11:53.647187] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:52.902 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:53.162 [2024-08-13 06:11:54.738828] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:53.162 [2024-08-13 06:11:54.738903] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.162 [2024-08-13 06:11:54.739140] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.162 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.421 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.421 "name": "raid_bdev1", 00:16:53.421 "uuid": "f61089d0-22db-43cf-ae5c-a9103aca6616", 00:16:53.421 "strip_size_kb": 0, 00:16:53.421 "state": "online", 00:16:53.421 "raid_level": "raid1", 00:16:53.421 "superblock": true, 00:16:53.421 "num_base_bdevs": 4, 00:16:53.421 "num_base_bdevs_discovered": 3, 00:16:53.421 "num_base_bdevs_operational": 3, 00:16:53.421 "base_bdevs_list": [ 00:16:53.421 { 00:16:53.421 "name": null, 00:16:53.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.421 "is_configured": false, 00:16:53.421 "data_offset": 2048, 00:16:53.421 "data_size": 63488 00:16:53.421 }, 00:16:53.421 { 00:16:53.421 "name": "BaseBdev2", 00:16:53.421 "uuid": "c3630486-9893-5dd7-86d9-330cffcf4099", 00:16:53.421 "is_configured": true, 00:16:53.421 "data_offset": 2048, 00:16:53.421 "data_size": 63488 00:16:53.421 }, 00:16:53.421 { 00:16:53.421 "name": "BaseBdev3", 00:16:53.421 "uuid": "3cdb8261-a7b5-5fde-bbc5-645acfe020b3", 00:16:53.421 "is_configured": true, 00:16:53.421 "data_offset": 2048, 00:16:53.421 "data_size": 63488 00:16:53.421 }, 00:16:53.421 { 00:16:53.421 "name": "BaseBdev4", 00:16:53.421 "uuid": "357a8a3d-4f32-5c56-9d9c-676f3120f8c3", 00:16:53.421 "is_configured": true, 00:16:53.421 "data_offset": 2048, 00:16:53.421 "data_size": 63488 00:16:53.421 } 00:16:53.421 ] 00:16:53.421 }' 00:16:53.421 06:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.421 06:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.990 06:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:53.990 [2024-08-13 06:11:55.718681] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.990 [2024-08-13 06:11:55.718727] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.990 [2024-08-13 06:11:55.721063] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.990 [2024-08-13 06:11:55.721108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.990 [2024-08-13 06:11:55.721204] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.990 [2024-08-13 06:11:55.721214] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:16:53.990 0 00:16:53.990 06:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 91900 00:16:53.990 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 91900 ']' 00:16:53.990 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 91900 00:16:53.990 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:16:53.990 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:53.990 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91900 00:16:54.250 killing process with pid 91900 00:16:54.250 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:54.250 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:54.250 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91900' 00:16:54.250 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 91900 00:16:54.250 [2024-08-13 06:11:55.785984] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.250 06:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 91900 00:16:54.250 [2024-08-13 06:11:55.821783] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.S5Bmvq9xID 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:54.510 00:16:54.510 real 0m6.634s 00:16:54.510 user 0m10.421s 00:16:54.510 sys 0m1.017s 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:54.510 06:11:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.510 ************************************ 00:16:54.510 END TEST raid_write_error_test 00:16:54.510 ************************************ 00:16:54.510 06:11:56 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:16:54.510 06:11:56 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:16:54.510 06:11:56 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:16:54.510 06:11:56 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:54.510 06:11:56 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.510 06:11:56 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:16:54.510 ************************************ 00:16:54.510 START TEST raid_rebuild_test 00:16:54.510 ************************************ 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false false true 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=92083 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 92083 /var/tmp/spdk-raid.sock 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 92083 ']' 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:54.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:54.510 06:11:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.510 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:54.510 Zero copy mechanism will not be used. 00:16:54.510 [2024-08-13 06:11:56.254773] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:16:54.510 [2024-08-13 06:11:56.254911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92083 ] 00:16:54.770 [2024-08-13 06:11:56.402238] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.770 [2024-08-13 06:11:56.448117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.770 [2024-08-13 06:11:56.490944] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.770 [2024-08-13 06:11:56.490998] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.338 06:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:55.338 06:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:16:55.338 06:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:16:55.338 06:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:55.597 BaseBdev1_malloc 00:16:55.597 06:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:55.856 [2024-08-13 06:11:57.463109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:55.856 [2024-08-13 06:11:57.463190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.856 [2024-08-13 06:11:57.463218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:55.856 [2024-08-13 06:11:57.463236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.856 [2024-08-13 06:11:57.465236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.856 [2024-08-13 06:11:57.465277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.856 BaseBdev1 00:16:55.856 06:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:16:55.856 06:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:56.115 BaseBdev2_malloc 00:16:56.115 06:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:56.115 [2024-08-13 06:11:57.851105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
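Note: the passthru-over-malloc stack being registered in the messages above is the base-bdev setup this rebuild test builds before creating the raid. A minimal manual equivalent, using only the RPCs visible in this trace (the RPC shorthand below is illustrative and not part of the test scripts; sizes match the logged bdev_malloc_create 32 512 calls, i.e. 32 MiB disks with 512-byte blocks, which is where the 65536-block base bdevs in the raid JSON come from):

# shorthand for the rpc.py invocation the test uses against the bdevperf socket
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# two 32 MiB / 512-byte-block malloc disks, each wrapped in a passthru bdev
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
$RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2_malloc
$RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
# raid1 across the two passthru bdevs (superblock is false for this test variant)
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

Wrapping each malloc disk in a passthru bdev appears to be what lets the test later remove a base bdev and attach a spare at the raid level without touching the malloc disk underneath.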
00:16:56.115 [2024-08-13 06:11:57.851160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.115 [2024-08-13 06:11:57.851179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:56.115 [2024-08-13 06:11:57.851189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.115 [2024-08-13 06:11:57.853153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.115 [2024-08-13 06:11:57.853194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:56.115 BaseBdev2 00:16:56.115 06:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:16:56.374 spare_malloc 00:16:56.374 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:56.634 spare_delay 00:16:56.634 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:16:56.634 [2024-08-13 06:11:58.417899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:56.634 [2024-08-13 06:11:58.417952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.634 [2024-08-13 06:11:58.417986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:56.634 [2024-08-13 06:11:58.417996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.634 [2024-08-13 06:11:58.419983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.634 [2024-08-13 06:11:58.420018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:56.634 spare 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:16:56.894 [2024-08-13 06:11:58.625653] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.894 [2024-08-13 06:11:58.627376] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.894 [2024-08-13 06:11:58.627495] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:56.894 [2024-08-13 06:11:58.627510] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:56.894 [2024-08-13 06:11:58.627735] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:56.894 [2024-08-13 06:11:58.627865] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:56.894 [2024-08-13 06:11:58.627898] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:56.894 [2024-08-13 06:11:58.628015] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:56.894 
06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.894 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.153 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.153 "name": "raid_bdev1", 00:16:57.153 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:16:57.153 "strip_size_kb": 0, 00:16:57.153 "state": "online", 00:16:57.153 "raid_level": "raid1", 00:16:57.153 "superblock": false, 00:16:57.153 "num_base_bdevs": 2, 00:16:57.153 "num_base_bdevs_discovered": 2, 00:16:57.153 "num_base_bdevs_operational": 2, 00:16:57.153 "base_bdevs_list": [ 00:16:57.153 { 00:16:57.153 "name": "BaseBdev1", 00:16:57.153 "uuid": "33a4ef11-792a-5822-8c10-08d56d1c3cb0", 00:16:57.153 "is_configured": true, 00:16:57.153 "data_offset": 0, 00:16:57.153 "data_size": 65536 00:16:57.153 }, 00:16:57.153 { 00:16:57.153 "name": "BaseBdev2", 00:16:57.153 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:16:57.153 "is_configured": true, 00:16:57.153 "data_offset": 0, 00:16:57.153 "data_size": 65536 00:16:57.153 } 00:16:57.153 ] 00:16:57.153 }' 00:16:57.153 06:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.153 06:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.722 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:57.722 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:16:57.981 [2024-08-13 06:11:59.552262] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.981 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:16:57.981 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:57.981 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # 
nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:58.241 [2024-08-13 06:11:59.967363] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:58.241 /dev/nbd0 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.241 06:11:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.241 1+0 records in 00:16:58.241 1+0 records out 00:16:58.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413771 s, 9.9 MB/s 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:16:58.241 06:12:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:03.514 65536+0 records in 00:17:03.514 65536+0 records out 00:17:03.514 33554432 bytes (34 MB, 32 MiB) copied, 4.21111 s, 8.0 MB/s 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:03.514 [2024-08-13 06:12:04.427438] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:17:03.514 [2024-08-13 06:12:04.659133] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:17:03.514 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.514 "name": "raid_bdev1", 00:17:03.514 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:03.514 "strip_size_kb": 0, 00:17:03.514 "state": "online", 00:17:03.514 "raid_level": "raid1", 00:17:03.514 "superblock": false, 00:17:03.514 "num_base_bdevs": 2, 00:17:03.514 "num_base_bdevs_discovered": 1, 00:17:03.514 "num_base_bdevs_operational": 1, 00:17:03.514 "base_bdevs_list": [ 00:17:03.514 { 00:17:03.514 "name": null, 00:17:03.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.514 "is_configured": false, 00:17:03.514 "data_offset": 0, 00:17:03.514 "data_size": 65536 00:17:03.514 }, 00:17:03.514 { 00:17:03.514 "name": "BaseBdev2", 00:17:03.515 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:03.515 "is_configured": true, 00:17:03.515 "data_offset": 0, 00:17:03.515 "data_size": 65536 00:17:03.515 } 00:17:03.515 ] 00:17:03.515 }' 00:17:03.515 06:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.515 06:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.774 06:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:04.033 [2024-08-13 06:12:05.589544] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.033 [2024-08-13 06:12:05.593662] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:17:04.033 [2024-08-13 06:12:05.595469] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.033 06:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:17:04.972 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.972 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:04.972 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:04.972 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:04.972 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:04.972 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.972 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.231 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.231 "name": "raid_bdev1", 00:17:05.231 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:05.231 "strip_size_kb": 0, 00:17:05.231 "state": "online", 00:17:05.231 "raid_level": "raid1", 00:17:05.231 "superblock": false, 00:17:05.231 "num_base_bdevs": 2, 00:17:05.231 "num_base_bdevs_discovered": 2, 00:17:05.231 "num_base_bdevs_operational": 2, 00:17:05.231 "process": { 00:17:05.231 "type": "rebuild", 00:17:05.231 "target": "spare", 00:17:05.231 "progress": { 00:17:05.231 "blocks": 22528, 00:17:05.231 "percent": 34 00:17:05.231 } 00:17:05.231 }, 00:17:05.231 "base_bdevs_list": [ 00:17:05.231 { 00:17:05.231 "name": "spare", 00:17:05.231 "uuid": "5d47ade8-5f85-5b66-adaa-a1d266ebb998", 00:17:05.231 "is_configured": true, 00:17:05.231 
"data_offset": 0, 00:17:05.231 "data_size": 65536 00:17:05.231 }, 00:17:05.231 { 00:17:05.231 "name": "BaseBdev2", 00:17:05.231 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:05.231 "is_configured": true, 00:17:05.231 "data_offset": 0, 00:17:05.231 "data_size": 65536 00:17:05.231 } 00:17:05.231 ] 00:17:05.231 }' 00:17:05.231 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:05.231 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.231 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:05.231 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.231 06:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:05.490 [2024-08-13 06:12:07.052119] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.490 [2024-08-13 06:12:07.100856] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:05.490 [2024-08-13 06:12:07.100906] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.490 [2024-08-13 06:12:07.100919] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.490 [2024-08-13 06:12:07.100930] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.490 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.749 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.749 "name": "raid_bdev1", 00:17:05.749 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:05.749 "strip_size_kb": 0, 00:17:05.749 "state": "online", 00:17:05.749 "raid_level": "raid1", 00:17:05.749 "superblock": false, 00:17:05.749 "num_base_bdevs": 2, 00:17:05.749 "num_base_bdevs_discovered": 1, 00:17:05.749 "num_base_bdevs_operational": 1, 00:17:05.749 "base_bdevs_list": [ 00:17:05.749 { 00:17:05.749 "name": null, 00:17:05.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.749 
"is_configured": false, 00:17:05.749 "data_offset": 0, 00:17:05.749 "data_size": 65536 00:17:05.749 }, 00:17:05.749 { 00:17:05.749 "name": "BaseBdev2", 00:17:05.749 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:05.749 "is_configured": true, 00:17:05.749 "data_offset": 0, 00:17:05.749 "data_size": 65536 00:17:05.749 } 00:17:05.749 ] 00:17:05.749 }' 00:17:05.749 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.749 06:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.316 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.316 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:06.316 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:06.316 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:06.316 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:06.316 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.316 06:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.574 06:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:06.574 "name": "raid_bdev1", 00:17:06.574 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:06.574 "strip_size_kb": 0, 00:17:06.574 "state": "online", 00:17:06.574 "raid_level": "raid1", 00:17:06.574 "superblock": false, 00:17:06.574 "num_base_bdevs": 2, 00:17:06.574 "num_base_bdevs_discovered": 1, 00:17:06.574 "num_base_bdevs_operational": 1, 00:17:06.574 "base_bdevs_list": [ 00:17:06.574 { 00:17:06.574 "name": null, 00:17:06.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.574 "is_configured": false, 00:17:06.574 "data_offset": 0, 00:17:06.575 "data_size": 65536 00:17:06.575 }, 00:17:06.575 { 00:17:06.575 "name": "BaseBdev2", 00:17:06.575 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:06.575 "is_configured": true, 00:17:06.575 "data_offset": 0, 00:17:06.575 "data_size": 65536 00:17:06.575 } 00:17:06.575 ] 00:17:06.575 }' 00:17:06.575 06:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:06.575 06:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:06.575 06:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:06.575 06:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:06.575 06:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.834 [2024-08-13 06:12:08.394532] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.834 [2024-08-13 06:12:08.398112] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:17:06.834 [2024-08-13 06:12:08.399831] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.834 06:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:17:07.770 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:07.770 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:07.770 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:07.770 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:07.770 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:07.770 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.770 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.029 "name": "raid_bdev1", 00:17:08.029 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:08.029 "strip_size_kb": 0, 00:17:08.029 "state": "online", 00:17:08.029 "raid_level": "raid1", 00:17:08.029 "superblock": false, 00:17:08.029 "num_base_bdevs": 2, 00:17:08.029 "num_base_bdevs_discovered": 2, 00:17:08.029 "num_base_bdevs_operational": 2, 00:17:08.029 "process": { 00:17:08.029 "type": "rebuild", 00:17:08.029 "target": "spare", 00:17:08.029 "progress": { 00:17:08.029 "blocks": 22528, 00:17:08.029 "percent": 34 00:17:08.029 } 00:17:08.029 }, 00:17:08.029 "base_bdevs_list": [ 00:17:08.029 { 00:17:08.029 "name": "spare", 00:17:08.029 "uuid": "5d47ade8-5f85-5b66-adaa-a1d266ebb998", 00:17:08.029 "is_configured": true, 00:17:08.029 "data_offset": 0, 00:17:08.029 "data_size": 65536 00:17:08.029 }, 00:17:08.029 { 00:17:08.029 "name": "BaseBdev2", 00:17:08.029 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:08.029 "is_configured": true, 00:17:08.029 "data_offset": 0, 00:17:08.029 "data_size": 65536 00:17:08.029 } 00:17:08.029 ] 00:17:08.029 }' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=681 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:08.029 06:12:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.029 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.288 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.288 "name": "raid_bdev1", 00:17:08.288 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:08.288 "strip_size_kb": 0, 00:17:08.288 "state": "online", 00:17:08.288 "raid_level": "raid1", 00:17:08.288 "superblock": false, 00:17:08.288 "num_base_bdevs": 2, 00:17:08.288 "num_base_bdevs_discovered": 2, 00:17:08.288 "num_base_bdevs_operational": 2, 00:17:08.288 "process": { 00:17:08.288 "type": "rebuild", 00:17:08.288 "target": "spare", 00:17:08.288 "progress": { 00:17:08.288 "blocks": 30720, 00:17:08.288 "percent": 46 00:17:08.288 } 00:17:08.288 }, 00:17:08.288 "base_bdevs_list": [ 00:17:08.288 { 00:17:08.288 "name": "spare", 00:17:08.288 "uuid": "5d47ade8-5f85-5b66-adaa-a1d266ebb998", 00:17:08.288 "is_configured": true, 00:17:08.288 "data_offset": 0, 00:17:08.288 "data_size": 65536 00:17:08.288 }, 00:17:08.288 { 00:17:08.288 "name": "BaseBdev2", 00:17:08.288 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:08.288 "is_configured": true, 00:17:08.288 "data_offset": 0, 00:17:08.288 "data_size": 65536 00:17:08.288 } 00:17:08.288 ] 00:17:08.288 }' 00:17:08.288 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:08.288 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.288 06:12:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:08.288 06:12:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.288 06:12:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.665 "name": "raid_bdev1", 00:17:09.665 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:09.665 "strip_size_kb": 0, 00:17:09.665 "state": "online", 00:17:09.665 "raid_level": "raid1", 00:17:09.665 "superblock": false, 00:17:09.665 "num_base_bdevs": 2, 00:17:09.665 "num_base_bdevs_discovered": 2, 00:17:09.665 "num_base_bdevs_operational": 2, 00:17:09.665 "process": { 00:17:09.665 "type": "rebuild", 00:17:09.665 "target": "spare", 00:17:09.665 "progress": { 00:17:09.665 "blocks": 55296, 00:17:09.665 "percent": 
84 00:17:09.665 } 00:17:09.665 }, 00:17:09.665 "base_bdevs_list": [ 00:17:09.665 { 00:17:09.665 "name": "spare", 00:17:09.665 "uuid": "5d47ade8-5f85-5b66-adaa-a1d266ebb998", 00:17:09.665 "is_configured": true, 00:17:09.665 "data_offset": 0, 00:17:09.665 "data_size": 65536 00:17:09.665 }, 00:17:09.665 { 00:17:09.665 "name": "BaseBdev2", 00:17:09.665 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:09.665 "is_configured": true, 00:17:09.665 "data_offset": 0, 00:17:09.665 "data_size": 65536 00:17:09.665 } 00:17:09.665 ] 00:17:09.665 }' 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.665 06:12:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:09.925 [2024-08-13 06:12:11.610092] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:09.925 [2024-08-13 06:12:11.610195] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:09.925 [2024-08-13 06:12:11.610279] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.863 "name": "raid_bdev1", 00:17:10.863 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:10.863 "strip_size_kb": 0, 00:17:10.863 "state": "online", 00:17:10.863 "raid_level": "raid1", 00:17:10.863 "superblock": false, 00:17:10.863 "num_base_bdevs": 2, 00:17:10.863 "num_base_bdevs_discovered": 2, 00:17:10.863 "num_base_bdevs_operational": 2, 00:17:10.863 "base_bdevs_list": [ 00:17:10.863 { 00:17:10.863 "name": "spare", 00:17:10.863 "uuid": "5d47ade8-5f85-5b66-adaa-a1d266ebb998", 00:17:10.863 "is_configured": true, 00:17:10.863 "data_offset": 0, 00:17:10.863 "data_size": 65536 00:17:10.863 }, 00:17:10.863 { 00:17:10.863 "name": "BaseBdev2", 00:17:10.863 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:10.863 "is_configured": true, 00:17:10.863 "data_offset": 0, 00:17:10.863 "data_size": 65536 00:17:10.863 } 00:17:10.863 ] 00:17:10.863 }' 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.863 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.122 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:11.122 "name": "raid_bdev1", 00:17:11.122 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:11.122 "strip_size_kb": 0, 00:17:11.122 "state": "online", 00:17:11.122 "raid_level": "raid1", 00:17:11.122 "superblock": false, 00:17:11.122 "num_base_bdevs": 2, 00:17:11.122 "num_base_bdevs_discovered": 2, 00:17:11.122 "num_base_bdevs_operational": 2, 00:17:11.122 "base_bdevs_list": [ 00:17:11.122 { 00:17:11.122 "name": "spare", 00:17:11.122 "uuid": "5d47ade8-5f85-5b66-adaa-a1d266ebb998", 00:17:11.122 "is_configured": true, 00:17:11.122 "data_offset": 0, 00:17:11.122 "data_size": 65536 00:17:11.122 }, 00:17:11.122 { 00:17:11.122 "name": "BaseBdev2", 00:17:11.122 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:11.122 "is_configured": true, 00:17:11.122 "data_offset": 0, 00:17:11.122 "data_size": 65536 00:17:11.122 } 00:17:11.122 ] 00:17:11.122 }' 00:17:11.122 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:11.122 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:11.122 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.382 06:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.382 06:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.382 "name": "raid_bdev1", 00:17:11.382 "uuid": "0b1716ae-903d-4c14-b9ab-b8022a89e1bf", 00:17:11.382 "strip_size_kb": 0, 00:17:11.382 "state": "online", 00:17:11.382 "raid_level": "raid1", 00:17:11.382 "superblock": false, 00:17:11.382 "num_base_bdevs": 2, 00:17:11.382 "num_base_bdevs_discovered": 2, 00:17:11.382 "num_base_bdevs_operational": 2, 00:17:11.382 "base_bdevs_list": [ 00:17:11.382 { 00:17:11.382 "name": "spare", 00:17:11.382 "uuid": "5d47ade8-5f85-5b66-adaa-a1d266ebb998", 00:17:11.382 "is_configured": true, 00:17:11.382 "data_offset": 0, 00:17:11.382 "data_size": 65536 00:17:11.382 }, 00:17:11.382 { 00:17:11.382 "name": "BaseBdev2", 00:17:11.382 "uuid": "8f47af76-c869-5c19-9a7a-2d94fd02e059", 00:17:11.382 "is_configured": true, 00:17:11.382 "data_offset": 0, 00:17:11.382 "data_size": 65536 00:17:11.382 } 00:17:11.382 ] 00:17:11.382 }' 00:17:11.382 06:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.382 06:12:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.950 06:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:12.224 [2024-08-13 06:12:13.826270] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.224 [2024-08-13 06:12:13.826305] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.224 [2024-08-13 06:12:13.826374] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.224 [2024-08-13 06:12:13.826448] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.224 [2024-08-13 06:12:13.826459] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:12.224 06:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.224 06:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:12.501 /dev/nbd0 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.501 1+0 records in 00:17:12.501 1+0 records out 00:17:12.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612664 s, 6.7 MB/s 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:17:12.501 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:17:12.761 /dev/nbd1 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:12.761 06:12:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.761 1+0 records in 00:17:12.761 1+0 records out 00:17:12.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408955 s, 10.0 MB/s 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.761 06:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:13.020 06:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:17:13.021 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:13.021 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.021 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.021 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:13.021 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.021 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.280 06:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 92083 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 92083 ']' 00:17:13.280 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 92083 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92083 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:13.540 killing process with pid 92083 00:17:13.540 Received shutdown signal, test time was about 60.000000 seconds 00:17:13.540 00:17:13.540 Latency(us) 00:17:13.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.540 =================================================================================================================== 00:17:13.540 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92083' 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 92083 00:17:13.540 [2024-08-13 06:12:15.114419] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.540 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 92083 00:17:13.540 [2024-08-13 06:12:15.145420] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:17:13.800 00:17:13.800 real 0m19.231s 00:17:13.800 user 0m25.664s 00:17:13.800 sys 0m3.805s 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:13.800 ************************************ 00:17:13.800 END TEST raid_rebuild_test 00:17:13.800 ************************************ 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.800 06:12:15 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:17:13.800 06:12:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:17:13.800 06:12:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:13.800 06:12:15 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:17:13.800 ************************************ 00:17:13.800 START TEST raid_rebuild_test_sb 00:17:13.800 ************************************ 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=92554 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 92554 /var/tmp/spdk-raid.sock 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 92554 ']' 00:17:13.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:13.800 06:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.800 [2024-08-13 06:12:15.568579] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:17:13.800 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:13.800 Zero copy mechanism will not be used. 00:17:13.800 [2024-08-13 06:12:15.568863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92554 ] 00:17:14.060 [2024-08-13 06:12:15.718371] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.060 [2024-08-13 06:12:15.766193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.060 [2024-08-13 06:12:15.810003] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.060 [2024-08-13 06:12:15.810053] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.628 06:12:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.628 06:12:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:17:14.628 06:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:14.628 06:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:14.888 BaseBdev1_malloc 00:17:14.888 06:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:15.147 [2024-08-13 06:12:16.750744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:15.147 [2024-08-13 06:12:16.750843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.147 [2024-08-13 06:12:16.750886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:15.147 [2024-08-13 06:12:16.750921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.147 [2024-08-13 06:12:16.752858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.147 [2024-08-13 06:12:16.752928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:15.147 BaseBdev1 00:17:15.147 06:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:15.147 06:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:15.406 BaseBdev2_malloc 00:17:15.406 06:12:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:15.406 [2024-08-13 06:12:17.174661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:15.406 [2024-08-13 06:12:17.174761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.406 [2024-08-13 06:12:17.174799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:15.406 [2024-08-13 06:12:17.174828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.406 [2024-08-13 06:12:17.176803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.406 [2024-08-13 06:12:17.176876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:15.406 BaseBdev2 00:17:15.666 06:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:17:15.666 spare_malloc 00:17:15.666 06:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:15.925 spare_delay 00:17:15.925 06:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:16.184 [2024-08-13 06:12:17.799304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.184 [2024-08-13 06:12:17.799411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.184 [2024-08-13 06:12:17.799452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:16.184 [2024-08-13 06:12:17.799481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.184 [2024-08-13 06:12:17.801568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.184 [2024-08-13 06:12:17.801641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.184 spare 00:17:16.184 06:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:17:16.444 [2024-08-13 06:12:18.003039] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.444 [2024-08-13 06:12:18.004773] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.444 [2024-08-13 06:12:18.004961] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:16.444 [2024-08-13 06:12:18.005002] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:16.444 [2024-08-13 06:12:18.005337] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:16.444 [2024-08-13 06:12:18.005482] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:16.444 [2024-08-13 06:12:18.005497] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:16.444 [2024-08-13 06:12:18.005625] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.444 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.703 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.703 "name": "raid_bdev1", 00:17:16.703 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:16.703 "strip_size_kb": 0, 00:17:16.703 "state": "online", 00:17:16.703 "raid_level": "raid1", 00:17:16.703 "superblock": true, 00:17:16.703 "num_base_bdevs": 2, 00:17:16.703 "num_base_bdevs_discovered": 2, 00:17:16.703 "num_base_bdevs_operational": 2, 00:17:16.703 "base_bdevs_list": [ 00:17:16.703 { 00:17:16.703 "name": "BaseBdev1", 00:17:16.703 "uuid": "7d14ed27-c7ac-5050-8272-e8ecc2a4a084", 00:17:16.703 "is_configured": true, 00:17:16.703 "data_offset": 2048, 00:17:16.703 "data_size": 63488 00:17:16.703 }, 00:17:16.703 { 00:17:16.703 "name": "BaseBdev2", 00:17:16.703 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:16.703 "is_configured": true, 00:17:16.703 "data_offset": 2048, 00:17:16.703 "data_size": 63488 00:17:16.703 } 00:17:16.703 ] 00:17:16.703 }' 00:17:16.703 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.703 06:12:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.272 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:17:17.272 06:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:17.272 [2024-08-13 06:12:18.973666] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.272 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:17:17.272 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:17.272 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 
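The verify_raid_bdev_state call above reduces to pulling raid_bdev1's JSON out of bdev_raid_get_bdevs and comparing a few fields against the expected values; the field names below are taken from the dump just printed, while the condensed check itself is only a sketch of what the helper in bdev_raid.sh does:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state'      <<< "$info") == online ]] || exit 1
    [[ $(jq -r '.raid_level' <<< "$info") == raid1  ]] || exit 1
    # "superblock": true reserves metadata space on every base bdev, hence data_offset 2048 blocks.
    data_offset=$(jq -r '.base_bdevs_list[0].data_offset' <<< "$info")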
00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.531 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:17.791 [2024-08-13 06:12:19.396960] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:17.791 /dev/nbd0 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.791 1+0 records in 00:17:17.791 1+0 records out 00:17:17.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464889 s, 8.8 MB/s 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # return 0 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:17:17.791 06:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:21.984 63488+0 records in 00:17:21.984 63488+0 records out 00:17:21.984 32505856 bytes (33 MB, 31 MiB) copied, 4.21461 s, 7.7 MB/s 00:17:21.984 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:17:21.984 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:21.984 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:21.984 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:21.984 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:21.984 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.984 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:22.243 [2024-08-13 06:12:23.900290] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.243 06:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:17:22.503 [2024-08-13 06:12:24.064237] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.503 "name": "raid_bdev1", 00:17:22.503 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:22.503 "strip_size_kb": 0, 00:17:22.503 "state": "online", 00:17:22.503 "raid_level": "raid1", 00:17:22.503 "superblock": true, 00:17:22.503 "num_base_bdevs": 2, 00:17:22.503 "num_base_bdevs_discovered": 1, 00:17:22.503 "num_base_bdevs_operational": 1, 00:17:22.503 "base_bdevs_list": [ 00:17:22.503 { 00:17:22.503 "name": null, 00:17:22.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.503 "is_configured": false, 00:17:22.503 "data_offset": 2048, 00:17:22.503 "data_size": 63488 00:17:22.503 }, 00:17:22.503 { 00:17:22.503 "name": "BaseBdev2", 00:17:22.503 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:22.503 "is_configured": true, 00:17:22.503 "data_offset": 2048, 00:17:22.503 "data_size": 63488 00:17:22.503 } 00:17:22.503 ] 00:17:22.503 }' 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.503 06:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.072 06:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:23.332 [2024-08-13 06:12:25.034567] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.332 [2024-08-13 06:12:25.042102] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:17:23.332 [2024-08-13 06:12:25.044358] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.332 06:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:17:24.271 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.271 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:24.271 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:24.271 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:24.271 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:24.530 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.530 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.530 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:24.530 "name": "raid_bdev1", 00:17:24.530 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:24.530 
"strip_size_kb": 0, 00:17:24.530 "state": "online", 00:17:24.530 "raid_level": "raid1", 00:17:24.530 "superblock": true, 00:17:24.530 "num_base_bdevs": 2, 00:17:24.530 "num_base_bdevs_discovered": 2, 00:17:24.530 "num_base_bdevs_operational": 2, 00:17:24.530 "process": { 00:17:24.530 "type": "rebuild", 00:17:24.530 "target": "spare", 00:17:24.530 "progress": { 00:17:24.530 "blocks": 22528, 00:17:24.530 "percent": 35 00:17:24.530 } 00:17:24.530 }, 00:17:24.530 "base_bdevs_list": [ 00:17:24.530 { 00:17:24.530 "name": "spare", 00:17:24.530 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:24.530 "is_configured": true, 00:17:24.530 "data_offset": 2048, 00:17:24.530 "data_size": 63488 00:17:24.530 }, 00:17:24.530 { 00:17:24.530 "name": "BaseBdev2", 00:17:24.530 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:24.530 "is_configured": true, 00:17:24.530 "data_offset": 2048, 00:17:24.530 "data_size": 63488 00:17:24.530 } 00:17:24.530 ] 00:17:24.530 }' 00:17:24.530 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:24.530 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.530 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:24.791 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.791 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:24.791 [2024-08-13 06:12:26.503897] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.791 [2024-08-13 06:12:26.555286] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:24.791 [2024-08-13 06:12:26.555370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.791 [2024-08-13 06:12:26.555387] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.791 [2024-08-13 06:12:26.555403] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.050 06:12:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.050 "name": "raid_bdev1", 00:17:25.050 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:25.050 "strip_size_kb": 0, 00:17:25.050 "state": "online", 00:17:25.050 "raid_level": "raid1", 00:17:25.050 "superblock": true, 00:17:25.050 "num_base_bdevs": 2, 00:17:25.050 "num_base_bdevs_discovered": 1, 00:17:25.050 "num_base_bdevs_operational": 1, 00:17:25.050 "base_bdevs_list": [ 00:17:25.050 { 00:17:25.050 "name": null, 00:17:25.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.050 "is_configured": false, 00:17:25.050 "data_offset": 2048, 00:17:25.050 "data_size": 63488 00:17:25.050 }, 00:17:25.050 { 00:17:25.050 "name": "BaseBdev2", 00:17:25.050 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:25.050 "is_configured": true, 00:17:25.050 "data_offset": 2048, 00:17:25.050 "data_size": 63488 00:17:25.050 } 00:17:25.050 ] 00:17:25.050 }' 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.050 06:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.620 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.620 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:25.620 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:25.620 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:25.620 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:25.620 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.620 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.880 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:25.880 "name": "raid_bdev1", 00:17:25.880 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:25.880 "strip_size_kb": 0, 00:17:25.880 "state": "online", 00:17:25.880 "raid_level": "raid1", 00:17:25.880 "superblock": true, 00:17:25.880 "num_base_bdevs": 2, 00:17:25.880 "num_base_bdevs_discovered": 1, 00:17:25.880 "num_base_bdevs_operational": 1, 00:17:25.880 "base_bdevs_list": [ 00:17:25.880 { 00:17:25.880 "name": null, 00:17:25.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.880 "is_configured": false, 00:17:25.880 "data_offset": 2048, 00:17:25.880 "data_size": 63488 00:17:25.880 }, 00:17:25.880 { 00:17:25.880 "name": "BaseBdev2", 00:17:25.880 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:25.880 "is_configured": true, 00:17:25.880 "data_offset": 2048, 00:17:25.880 "data_size": 63488 00:17:25.880 } 00:17:25.880 ] 00:17:25.880 }' 00:17:25.880 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:25.880 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:25.880 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:25.880 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
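Both here and further down, the rebuild is monitored the same way: poll bdev_raid_get_bdevs about once a second and watch the .process object until it disappears. A small sketch of that loop, with the jq paths copied from the raid_bdev_info dumps above and the timeout value chosen only for illustration:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=60
    while (( SECONDS < timeout )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # .process is only reported while a rebuild is running; "none" means it has finished.
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        jq -r '.process.progress | "rebuilt \(.blocks) blocks (\(.percent)%)"' <<< "$info"
        sleep 1
    done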
00:17:25.880 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.140 [2024-08-13 06:12:27.838082] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.140 [2024-08-13 06:12:27.845442] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:17:26.140 [2024-08-13 06:12:27.847622] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.140 06:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:17:27.079 06:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.079 06:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:27.079 06:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:27.079 06:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:27.079 06:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:27.338 06:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.338 06:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.338 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:27.338 "name": "raid_bdev1", 00:17:27.338 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:27.338 "strip_size_kb": 0, 00:17:27.338 "state": "online", 00:17:27.338 "raid_level": "raid1", 00:17:27.338 "superblock": true, 00:17:27.338 "num_base_bdevs": 2, 00:17:27.338 "num_base_bdevs_discovered": 2, 00:17:27.338 "num_base_bdevs_operational": 2, 00:17:27.338 "process": { 00:17:27.338 "type": "rebuild", 00:17:27.338 "target": "spare", 00:17:27.338 "progress": { 00:17:27.338 "blocks": 22528, 00:17:27.338 "percent": 35 00:17:27.338 } 00:17:27.338 }, 00:17:27.338 "base_bdevs_list": [ 00:17:27.338 { 00:17:27.338 "name": "spare", 00:17:27.338 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:27.338 "is_configured": true, 00:17:27.338 "data_offset": 2048, 00:17:27.338 "data_size": 63488 00:17:27.338 }, 00:17:27.338 { 00:17:27.338 "name": "BaseBdev2", 00:17:27.338 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:27.338 "is_configured": true, 00:17:27.338 "data_offset": 2048, 00:17:27.338 "data_size": 63488 00:17:27.338 } 00:17:27.338 ] 00:17:27.338 }' 00:17:27.338 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:17:27.597 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=701 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:27.597 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:27.598 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:27.598 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:27.598 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.598 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.598 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:27.598 "name": "raid_bdev1", 00:17:27.598 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:27.598 "strip_size_kb": 0, 00:17:27.598 "state": "online", 00:17:27.598 "raid_level": "raid1", 00:17:27.598 "superblock": true, 00:17:27.598 "num_base_bdevs": 2, 00:17:27.598 "num_base_bdevs_discovered": 2, 00:17:27.598 "num_base_bdevs_operational": 2, 00:17:27.598 "process": { 00:17:27.598 "type": "rebuild", 00:17:27.598 "target": "spare", 00:17:27.598 "progress": { 00:17:27.598 "blocks": 30720, 00:17:27.598 "percent": 48 00:17:27.598 } 00:17:27.598 }, 00:17:27.598 "base_bdevs_list": [ 00:17:27.598 { 00:17:27.598 "name": "spare", 00:17:27.598 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:27.598 "is_configured": true, 00:17:27.598 "data_offset": 2048, 00:17:27.598 "data_size": 63488 00:17:27.598 }, 00:17:27.598 { 00:17:27.598 "name": "BaseBdev2", 00:17:27.598 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:27.598 "is_configured": true, 00:17:27.598 "data_offset": 2048, 00:17:27.598 "data_size": 63488 00:17:27.598 } 00:17:27.598 ] 00:17:27.598 }' 00:17:27.857 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:27.857 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.857 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:27.857 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.857 06:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:28.796 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:28.796 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.796 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:28.796 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:28.796 06:12:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:28.796 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:28.796 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.796 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.056 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.056 "name": "raid_bdev1", 00:17:29.056 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:29.056 "strip_size_kb": 0, 00:17:29.056 "state": "online", 00:17:29.056 "raid_level": "raid1", 00:17:29.056 "superblock": true, 00:17:29.056 "num_base_bdevs": 2, 00:17:29.056 "num_base_bdevs_discovered": 2, 00:17:29.056 "num_base_bdevs_operational": 2, 00:17:29.056 "process": { 00:17:29.056 "type": "rebuild", 00:17:29.056 "target": "spare", 00:17:29.056 "progress": { 00:17:29.056 "blocks": 55296, 00:17:29.056 "percent": 87 00:17:29.056 } 00:17:29.056 }, 00:17:29.056 "base_bdevs_list": [ 00:17:29.056 { 00:17:29.056 "name": "spare", 00:17:29.056 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:29.056 "is_configured": true, 00:17:29.056 "data_offset": 2048, 00:17:29.056 "data_size": 63488 00:17:29.056 }, 00:17:29.056 { 00:17:29.056 "name": "BaseBdev2", 00:17:29.056 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:29.056 "is_configured": true, 00:17:29.056 "data_offset": 2048, 00:17:29.056 "data_size": 63488 00:17:29.056 } 00:17:29.056 ] 00:17:29.056 }' 00:17:29.056 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:29.056 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.056 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:29.056 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.056 06:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:29.315 [2024-08-13 06:12:30.968415] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:29.315 [2024-08-13 06:12:30.968547] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:29.315 [2024-08-13 06:12:30.968682] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.255 06:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:17:30.255 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.255 "name": "raid_bdev1", 00:17:30.255 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:30.255 "strip_size_kb": 0, 00:17:30.255 "state": "online", 00:17:30.255 "raid_level": "raid1", 00:17:30.255 "superblock": true, 00:17:30.255 "num_base_bdevs": 2, 00:17:30.255 "num_base_bdevs_discovered": 2, 00:17:30.255 "num_base_bdevs_operational": 2, 00:17:30.255 "base_bdevs_list": [ 00:17:30.255 { 00:17:30.255 "name": "spare", 00:17:30.255 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:30.255 "is_configured": true, 00:17:30.255 "data_offset": 2048, 00:17:30.255 "data_size": 63488 00:17:30.255 }, 00:17:30.255 { 00:17:30.255 "name": "BaseBdev2", 00:17:30.255 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:30.255 "is_configured": true, 00:17:30.255 "data_offset": 2048, 00:17:30.255 "data_size": 63488 00:17:30.255 } 00:17:30.255 ] 00:17:30.255 }' 00:17:30.255 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.515 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.775 "name": "raid_bdev1", 00:17:30.775 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:30.775 "strip_size_kb": 0, 00:17:30.775 "state": "online", 00:17:30.775 "raid_level": "raid1", 00:17:30.775 "superblock": true, 00:17:30.775 "num_base_bdevs": 2, 00:17:30.775 "num_base_bdevs_discovered": 2, 00:17:30.775 "num_base_bdevs_operational": 2, 00:17:30.775 "base_bdevs_list": [ 00:17:30.775 { 00:17:30.775 "name": "spare", 00:17:30.775 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:30.775 "is_configured": true, 00:17:30.775 "data_offset": 2048, 00:17:30.775 "data_size": 63488 00:17:30.775 }, 00:17:30.775 { 00:17:30.775 "name": "BaseBdev2", 00:17:30.775 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:30.775 "is_configured": true, 00:17:30.775 "data_offset": 2048, 00:17:30.775 "data_size": 63488 00:17:30.775 } 00:17:30.775 ] 00:17:30.775 }' 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.775 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.035 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.035 "name": "raid_bdev1", 00:17:31.035 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:31.035 "strip_size_kb": 0, 00:17:31.035 "state": "online", 00:17:31.035 "raid_level": "raid1", 00:17:31.035 "superblock": true, 00:17:31.035 "num_base_bdevs": 2, 00:17:31.035 "num_base_bdevs_discovered": 2, 00:17:31.035 "num_base_bdevs_operational": 2, 00:17:31.035 "base_bdevs_list": [ 00:17:31.035 { 00:17:31.035 "name": "spare", 00:17:31.035 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:31.035 "is_configured": true, 00:17:31.035 "data_offset": 2048, 00:17:31.035 "data_size": 63488 00:17:31.035 }, 00:17:31.035 { 00:17:31.035 "name": "BaseBdev2", 00:17:31.035 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:31.035 "is_configured": true, 00:17:31.035 "data_offset": 2048, 00:17:31.035 "data_size": 63488 00:17:31.035 } 00:17:31.035 ] 00:17:31.035 }' 00:17:31.035 06:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.035 06:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.605 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:31.606 [2024-08-13 06:12:33.341192] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.606 [2024-08-13 06:12:33.341310] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.606 [2024-08-13 06:12:33.341465] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.606 [2024-08-13 06:12:33.341560] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
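Before the on-disk data is compared, the traces below confirm that deleting raid_bdev1 really removed it from the target; a sketch of that emptiness check, with the jq length test mirroring the @735 lines that follow:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    remaining=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq length)
    [[ $remaining == 0 ]] || exit 1   # the deleted array must no longer be reported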
00:17:31.606 [2024-08-13 06:12:33.341579] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:31.606 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.606 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.866 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.125 /dev/nbd0 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.125 1+0 records in 00:17:32.125 1+0 records out 00:17:32.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506802 s, 8.1 MB/s 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 
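The comparison coming up checks the rebuilt payload through NBD while skipping the raid metadata: with data_offset reported as 2048 blocks of 512 bytes, user data starts 1 MiB into BaseBdev1 and spare, which is exactly where the cmp offset below comes from:

    # 2048 blocks * 512 bytes = 1048576 bytes (1 MiB) of superblock/metadata area per base bdev,
    # so the payload comparison starts at that byte offset on both exported devices.
    cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1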
00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.125 06:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:17:32.384 /dev/nbd1 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.384 1+0 records in 00:17:32.384 1+0 records out 00:17:32.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538233 s, 7.6 MB/s 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.384 06:12:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.384 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.644 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:17:32.903 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:17:33.162 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:33.162 [2024-08-13 06:12:34.920318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:33.162 [2024-08-13 06:12:34.920380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.162 [2024-08-13 06:12:34.920420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:33.162 [2024-08-13 06:12:34.920429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.162 [2024-08-13 06:12:34.922481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.162 [2024-08-13 06:12:34.922578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:33.162 [2024-08-13 06:12:34.922673] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:17:33.162 [2024-08-13 06:12:34.922719] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.162 [2024-08-13 06:12:34.922850] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.162 spare 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.421 06:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.421 [2024-08-13 06:12:35.022765] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:17:33.421 [2024-08-13 06:12:35.022873] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:33.421 [2024-08-13 06:12:35.023209] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:17:33.421 [2024-08-13 06:12:35.023442] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:17:33.421 [2024-08-13 06:12:35.023485] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:17:33.421 [2024-08-13 06:12:35.023662] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.421 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.421 "name": "raid_bdev1", 00:17:33.421 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:33.421 "strip_size_kb": 0, 00:17:33.421 "state": "online", 00:17:33.421 "raid_level": "raid1", 00:17:33.421 "superblock": true, 00:17:33.421 "num_base_bdevs": 2, 00:17:33.421 "num_base_bdevs_discovered": 2, 00:17:33.421 "num_base_bdevs_operational": 2, 00:17:33.421 "base_bdevs_list": [ 00:17:33.421 { 00:17:33.421 "name": "spare", 00:17:33.421 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:33.421 "is_configured": true, 00:17:33.421 "data_offset": 2048, 00:17:33.421 "data_size": 63488 00:17:33.421 }, 00:17:33.421 { 00:17:33.421 "name": "BaseBdev2", 00:17:33.421 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:33.421 "is_configured": true, 00:17:33.421 "data_offset": 2048, 00:17:33.421 "data_size": 63488 00:17:33.421 } 00:17:33.421 ] 00:17:33.421 }' 00:17:33.421 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
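Every verify_raid_bdev_state call in this log follows the same pattern: dump all RAID bdevs over the test's RPC socket and pick out raid_bdev1 with jq. A minimal stand-alone version of that query, with the socket path and jq filter taken verbatim from the trace (the extracted field list is only illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump every RAID bdev and keep only raid_bdev1.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')

    # Fields the test asserts on: state, level, discovered/operational member counts.
    jq -r '[.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational] | @tsv' <<< "$info"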
00:17:33.421 06:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.991 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.991 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:33.991 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:33.991 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:33.991 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:33.991 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.991 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.250 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.250 "name": "raid_bdev1", 00:17:34.250 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:34.250 "strip_size_kb": 0, 00:17:34.250 "state": "online", 00:17:34.250 "raid_level": "raid1", 00:17:34.250 "superblock": true, 00:17:34.250 "num_base_bdevs": 2, 00:17:34.250 "num_base_bdevs_discovered": 2, 00:17:34.250 "num_base_bdevs_operational": 2, 00:17:34.250 "base_bdevs_list": [ 00:17:34.250 { 00:17:34.250 "name": "spare", 00:17:34.250 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:34.250 "is_configured": true, 00:17:34.250 "data_offset": 2048, 00:17:34.250 "data_size": 63488 00:17:34.250 }, 00:17:34.250 { 00:17:34.250 "name": "BaseBdev2", 00:17:34.250 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:34.250 "is_configured": true, 00:17:34.250 "data_offset": 2048, 00:17:34.250 "data_size": 63488 00:17:34.250 } 00:17:34.250 ] 00:17:34.250 }' 00:17:34.250 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:34.250 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:34.250 06:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:34.250 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:34.250 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.250 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:34.510 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.510 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:34.769 [2024-08-13 06:12:36.358068] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.769 
06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.769 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.029 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.029 "name": "raid_bdev1", 00:17:35.029 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:35.029 "strip_size_kb": 0, 00:17:35.029 "state": "online", 00:17:35.029 "raid_level": "raid1", 00:17:35.029 "superblock": true, 00:17:35.029 "num_base_bdevs": 2, 00:17:35.029 "num_base_bdevs_discovered": 1, 00:17:35.029 "num_base_bdevs_operational": 1, 00:17:35.029 "base_bdevs_list": [ 00:17:35.029 { 00:17:35.029 "name": null, 00:17:35.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.029 "is_configured": false, 00:17:35.029 "data_offset": 2048, 00:17:35.029 "data_size": 63488 00:17:35.029 }, 00:17:35.029 { 00:17:35.029 "name": "BaseBdev2", 00:17:35.029 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:35.029 "is_configured": true, 00:17:35.029 "data_offset": 2048, 00:17:35.029 "data_size": 63488 00:17:35.029 } 00:17:35.029 ] 00:17:35.029 }' 00:17:35.029 06:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.029 06:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.597 06:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.597 [2024-08-13 06:12:37.280558] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.597 [2024-08-13 06:12:37.280834] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:35.597 [2024-08-13 06:12:37.280898] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:35.597 [2024-08-13 06:12:37.280967] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.597 [2024-08-13 06:12:37.285021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:17:35.597 [2024-08-13 06:12:37.286872] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.597 06:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:17:36.550 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.550 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:36.550 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:36.550 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:36.550 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:36.550 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.550 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.825 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.825 "name": "raid_bdev1", 00:17:36.825 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:36.825 "strip_size_kb": 0, 00:17:36.825 "state": "online", 00:17:36.825 "raid_level": "raid1", 00:17:36.825 "superblock": true, 00:17:36.825 "num_base_bdevs": 2, 00:17:36.825 "num_base_bdevs_discovered": 2, 00:17:36.825 "num_base_bdevs_operational": 2, 00:17:36.825 "process": { 00:17:36.825 "type": "rebuild", 00:17:36.825 "target": "spare", 00:17:36.825 "progress": { 00:17:36.825 "blocks": 22528, 00:17:36.825 "percent": 35 00:17:36.825 } 00:17:36.825 }, 00:17:36.825 "base_bdevs_list": [ 00:17:36.825 { 00:17:36.825 "name": "spare", 00:17:36.825 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:36.825 "is_configured": true, 00:17:36.825 "data_offset": 2048, 00:17:36.825 "data_size": 63488 00:17:36.825 }, 00:17:36.825 { 00:17:36.825 "name": "BaseBdev2", 00:17:36.825 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:36.825 "is_configured": true, 00:17:36.825 "data_offset": 2048, 00:17:36.825 "data_size": 63488 00:17:36.825 } 00:17:36.825 ] 00:17:36.825 }' 00:17:36.825 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:36.825 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.825 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:36.825 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.825 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:17:37.085 [2024-08-13 06:12:38.779360] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.085 [2024-08-13 06:12:38.792974] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.085 [2024-08-13 06:12:38.793043] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.085 
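The rebuild checks in this part of the log rely on the optional .process object in the bdev_raid_get_bdevs output: while a rebuild is running it carries the process type, the target bdev and a progress block count, and it disappears once the rebuild completes. A hedged sketch of a progress poll built from the same jq filters used above (the test itself only samples once after a short sleep; the loop is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Poll raid_bdev1 until its rebuild process object goes away.
    while :; do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        type=$(jq -r '.process.type // "none"' <<< "$info")
        [ "$type" = none ] && break   # no process object: rebuild finished (or never started)
        target=$(jq -r '.process.target // "none"' <<< "$info")
        percent=$(jq -r '.process.progress.percent' <<< "$info")
        echo "rebuild of $target: ${percent}%"
        sleep 1
    done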
[2024-08-13 06:12:38.793058] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.085 [2024-08-13 06:12:38.793067] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.085 06:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.345 06:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.345 "name": "raid_bdev1", 00:17:37.345 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:37.345 "strip_size_kb": 0, 00:17:37.345 "state": "online", 00:17:37.345 "raid_level": "raid1", 00:17:37.345 "superblock": true, 00:17:37.345 "num_base_bdevs": 2, 00:17:37.345 "num_base_bdevs_discovered": 1, 00:17:37.345 "num_base_bdevs_operational": 1, 00:17:37.345 "base_bdevs_list": [ 00:17:37.345 { 00:17:37.345 "name": null, 00:17:37.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.345 "is_configured": false, 00:17:37.345 "data_offset": 2048, 00:17:37.345 "data_size": 63488 00:17:37.345 }, 00:17:37.345 { 00:17:37.345 "name": "BaseBdev2", 00:17:37.345 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:37.345 "is_configured": true, 00:17:37.345 "data_offset": 2048, 00:17:37.345 "data_size": 63488 00:17:37.345 } 00:17:37.345 ] 00:17:37.345 }' 00:17:37.345 06:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.345 06:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.913 06:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:38.173 [2024-08-13 06:12:39.727754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:38.173 [2024-08-13 06:12:39.727896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.173 [2024-08-13 06:12:39.727935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:38.173 [2024-08-13 06:12:39.727965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.173 [2024-08-13 06:12:39.728388] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.173 [2024-08-13 06:12:39.728449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:38.173 [2024-08-13 06:12:39.728560] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:38.173 [2024-08-13 06:12:39.728599] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.173 [2024-08-13 06:12:39.728659] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:38.173 [2024-08-13 06:12:39.728715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.173 [2024-08-13 06:12:39.732761] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:17:38.173 spare 00:17:38.173 [2024-08-13 06:12:39.734548] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.173 06:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:17:39.111 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.111 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:39.111 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:39.111 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:39.111 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:39.111 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.111 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.370 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.370 "name": "raid_bdev1", 00:17:39.370 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:39.370 "strip_size_kb": 0, 00:17:39.370 "state": "online", 00:17:39.370 "raid_level": "raid1", 00:17:39.370 "superblock": true, 00:17:39.370 "num_base_bdevs": 2, 00:17:39.370 "num_base_bdevs_discovered": 2, 00:17:39.370 "num_base_bdevs_operational": 2, 00:17:39.370 "process": { 00:17:39.370 "type": "rebuild", 00:17:39.370 "target": "spare", 00:17:39.370 "progress": { 00:17:39.370 "blocks": 24576, 00:17:39.370 "percent": 38 00:17:39.370 } 00:17:39.370 }, 00:17:39.370 "base_bdevs_list": [ 00:17:39.370 { 00:17:39.370 "name": "spare", 00:17:39.370 "uuid": "bacfe41a-4f78-557b-99dc-a95aff75e7b5", 00:17:39.370 "is_configured": true, 00:17:39.370 "data_offset": 2048, 00:17:39.370 "data_size": 63488 00:17:39.370 }, 00:17:39.370 { 00:17:39.370 "name": "BaseBdev2", 00:17:39.370 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:39.370 "is_configured": true, 00:17:39.370 "data_offset": 2048, 00:17:39.370 "data_size": 63488 00:17:39.370 } 00:17:39.370 ] 00:17:39.370 }' 00:17:39.370 06:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:39.370 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.370 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:39.370 
06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.370 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:17:39.629 [2024-08-13 06:12:41.254528] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.629 [2024-08-13 06:12:41.339788] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.629 [2024-08-13 06:12:41.339890] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.629 [2024-08-13 06:12:41.339923] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.629 [2024-08-13 06:12:41.339942] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.629 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.889 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.889 "name": "raid_bdev1", 00:17:39.889 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:39.889 "strip_size_kb": 0, 00:17:39.889 "state": "online", 00:17:39.889 "raid_level": "raid1", 00:17:39.889 "superblock": true, 00:17:39.889 "num_base_bdevs": 2, 00:17:39.889 "num_base_bdevs_discovered": 1, 00:17:39.889 "num_base_bdevs_operational": 1, 00:17:39.889 "base_bdevs_list": [ 00:17:39.889 { 00:17:39.889 "name": null, 00:17:39.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.889 "is_configured": false, 00:17:39.889 "data_offset": 2048, 00:17:39.889 "data_size": 63488 00:17:39.889 }, 00:17:39.889 { 00:17:39.889 "name": "BaseBdev2", 00:17:39.889 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:39.889 "is_configured": true, 00:17:39.889 "data_offset": 2048, 00:17:39.889 "data_size": 63488 00:17:39.889 } 00:17:39.889 ] 00:17:39.889 }' 00:17:39.889 06:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.889 06:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.457 06:12:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.457 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:40.457 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:40.457 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:40.457 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:40.457 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.457 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.716 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:40.716 "name": "raid_bdev1", 00:17:40.716 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:40.716 "strip_size_kb": 0, 00:17:40.716 "state": "online", 00:17:40.716 "raid_level": "raid1", 00:17:40.716 "superblock": true, 00:17:40.716 "num_base_bdevs": 2, 00:17:40.716 "num_base_bdevs_discovered": 1, 00:17:40.716 "num_base_bdevs_operational": 1, 00:17:40.716 "base_bdevs_list": [ 00:17:40.716 { 00:17:40.716 "name": null, 00:17:40.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.716 "is_configured": false, 00:17:40.716 "data_offset": 2048, 00:17:40.716 "data_size": 63488 00:17:40.716 }, 00:17:40.716 { 00:17:40.716 "name": "BaseBdev2", 00:17:40.716 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:40.716 "is_configured": true, 00:17:40.716 "data_offset": 2048, 00:17:40.716 "data_size": 63488 00:17:40.716 } 00:17:40.716 ] 00:17:40.716 }' 00:17:40.716 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:40.716 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:40.716 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:40.716 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:40.716 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:17:40.975 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:41.235 [2024-08-13 06:12:42.809520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:41.235 [2024-08-13 06:12:42.809578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.235 [2024-08-13 06:12:42.809600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:41.235 [2024-08-13 06:12:42.809609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.235 [2024-08-13 06:12:42.809973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.235 [2024-08-13 06:12:42.809989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:41.235 [2024-08-13 06:12:42.810074] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:41.235 [2024-08-13 06:12:42.810088] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:41.235 [2024-08-13 06:12:42.810102] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:41.235 BaseBdev1 00:17:41.235 06:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.173 06:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.431 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.431 "name": "raid_bdev1", 00:17:42.431 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:42.431 "strip_size_kb": 0, 00:17:42.431 "state": "online", 00:17:42.431 "raid_level": "raid1", 00:17:42.431 "superblock": true, 00:17:42.431 "num_base_bdevs": 2, 00:17:42.431 "num_base_bdevs_discovered": 1, 00:17:42.431 "num_base_bdevs_operational": 1, 00:17:42.431 "base_bdevs_list": [ 00:17:42.431 { 00:17:42.431 "name": null, 00:17:42.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.431 "is_configured": false, 00:17:42.431 "data_offset": 2048, 00:17:42.431 "data_size": 63488 00:17:42.431 }, 00:17:42.431 { 00:17:42.431 "name": "BaseBdev2", 00:17:42.431 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:42.431 "is_configured": true, 00:17:42.431 "data_offset": 2048, 00:17:42.431 "data_size": 63488 00:17:42.431 } 00:17:42.431 ] 00:17:42.431 }' 00:17:42.431 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.431 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.999 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.000 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:43.000 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:43.000 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:43.000 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:43.000 06:12:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.000 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.259 "name": "raid_bdev1", 00:17:43.259 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:43.259 "strip_size_kb": 0, 00:17:43.259 "state": "online", 00:17:43.259 "raid_level": "raid1", 00:17:43.259 "superblock": true, 00:17:43.259 "num_base_bdevs": 2, 00:17:43.259 "num_base_bdevs_discovered": 1, 00:17:43.259 "num_base_bdevs_operational": 1, 00:17:43.259 "base_bdevs_list": [ 00:17:43.259 { 00:17:43.259 "name": null, 00:17:43.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.259 "is_configured": false, 00:17:43.259 "data_offset": 2048, 00:17:43.259 "data_size": 63488 00:17:43.259 }, 00:17:43.259 { 00:17:43.259 "name": "BaseBdev2", 00:17:43.259 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:43.259 "is_configured": true, 00:17:43.259 "data_offset": 2048, 00:17:43.259 "data_size": 63488 00:17:43.259 } 00:17:43.259 ] 00:17:43.259 }' 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@646 -- # local es=0 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:43.259 06:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:43.518 [2024-08-13 06:12:45.053796] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.518 [2024-08-13 06:12:45.054039] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:43.518 [2024-08-13 06:12:45.054112] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:43.518 request: 00:17:43.518 { 00:17:43.518 "base_bdev": "BaseBdev1", 00:17:43.518 "raid_bdev": "raid_bdev1", 00:17:43.518 "method": "bdev_raid_add_base_bdev", 00:17:43.518 "req_id": 1 00:17:43.518 } 00:17:43.518 Got JSON-RPC error response 00:17:43.518 response: 00:17:43.518 { 00:17:43.518 "code": -22, 00:17:43.518 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:43.518 } 00:17:43.518 06:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:17:43.518 06:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:17:43.518 06:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:17:43.518 06:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:17:43.518 06:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.454 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.712 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.712 "name": "raid_bdev1", 00:17:44.712 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:44.712 "strip_size_kb": 0, 00:17:44.712 "state": "online", 00:17:44.712 "raid_level": "raid1", 00:17:44.712 "superblock": true, 00:17:44.712 "num_base_bdevs": 2, 00:17:44.712 "num_base_bdevs_discovered": 1, 00:17:44.712 "num_base_bdevs_operational": 1, 00:17:44.712 "base_bdevs_list": [ 00:17:44.712 { 00:17:44.712 "name": null, 00:17:44.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.712 "is_configured": false, 00:17:44.712 "data_offset": 2048, 00:17:44.712 "data_size": 63488 00:17:44.712 }, 00:17:44.712 { 00:17:44.712 "name": "BaseBdev2", 00:17:44.712 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 
00:17:44.712 "is_configured": true, 00:17:44.712 "data_offset": 2048, 00:17:44.712 "data_size": 63488 00:17:44.712 } 00:17:44.712 ] 00:17:44.712 }' 00:17:44.712 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.712 06:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.279 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.279 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:45.279 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:45.279 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:45.279 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:45.279 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.279 06:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.279 06:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:45.279 "name": "raid_bdev1", 00:17:45.279 "uuid": "8de25fe2-8014-40c7-a4dc-93f6b41e78a7", 00:17:45.279 "strip_size_kb": 0, 00:17:45.279 "state": "online", 00:17:45.279 "raid_level": "raid1", 00:17:45.279 "superblock": true, 00:17:45.279 "num_base_bdevs": 2, 00:17:45.279 "num_base_bdevs_discovered": 1, 00:17:45.279 "num_base_bdevs_operational": 1, 00:17:45.279 "base_bdevs_list": [ 00:17:45.279 { 00:17:45.279 "name": null, 00:17:45.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.279 "is_configured": false, 00:17:45.279 "data_offset": 2048, 00:17:45.279 "data_size": 63488 00:17:45.279 }, 00:17:45.279 { 00:17:45.279 "name": "BaseBdev2", 00:17:45.279 "uuid": "bf733c16-4741-5385-8f4b-8026b9dfb2e3", 00:17:45.279 "is_configured": true, 00:17:45.279 "data_offset": 2048, 00:17:45.279 "data_size": 63488 00:17:45.279 } 00:17:45.279 ] 00:17:45.279 }' 00:17:45.279 06:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:45.279 06:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:45.279 06:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 92554 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 92554 ']' 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 92554 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92554 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:45.538 killing process with pid 92554 00:17:45.538 Received shutdown signal, test time was about 60.000000 seconds 00:17:45.538 00:17:45.538 Latency(us) 00:17:45.538 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.538 =================================================================================================================== 00:17:45.538 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92554' 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 92554 00:17:45.538 [2024-08-13 06:12:47.120317] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.538 [2024-08-13 06:12:47.120433] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.538 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 92554 00:17:45.538 [2024-08-13 06:12:47.120485] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.538 [2024-08-13 06:12:47.120495] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:17:45.538 [2024-08-13 06:12:47.151910] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:17:45.797 00:17:45.797 real 0m31.927s 00:17:45.797 user 0m46.438s 00:17:45.797 sys 0m5.328s 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.797 ************************************ 00:17:45.797 END TEST raid_rebuild_test_sb 00:17:45.797 ************************************ 00:17:45.797 06:12:47 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:17:45.797 06:12:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:17:45.797 06:12:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:45.797 06:12:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.797 ************************************ 00:17:45.797 START TEST raid_rebuild_test_io 00:17:45.797 ************************************ 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false true true 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # 
(( i <= num_base_bdevs )) 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=93392 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 93392 /var/tmp/spdk-raid.sock 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 93392 ']' 00:17:45.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.797 06:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.797 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:45.797 Zero copy mechanism will not be used. 00:17:45.797 [2024-08-13 06:12:47.572513] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:17:45.797 [2024-08-13 06:12:47.572675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93392 ] 00:17:46.056 [2024-08-13 06:12:47.721162] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.056 [2024-08-13 06:12:47.768422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.056 [2024-08-13 06:12:47.811398] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.056 [2024-08-13 06:12:47.811518] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.623 06:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.623 06:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:17:46.623 06:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:46.623 06:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:46.884 BaseBdev1_malloc 00:17:46.884 06:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:47.147 [2024-08-13 06:12:48.715978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:47.147 [2024-08-13 06:12:48.716055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.147 [2024-08-13 06:12:48.716080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:47.147 [2024-08-13 06:12:48.716092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.147 [2024-08-13 06:12:48.718199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.147 [2024-08-13 06:12:48.718246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:47.147 BaseBdev1 00:17:47.147 06:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:17:47.147 06:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:47.147 BaseBdev2_malloc 00:17:47.406 06:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:47.406 [2024-08-13 06:12:49.111842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:47.406 [2024-08-13 06:12:49.111907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.406 [2024-08-13 06:12:49.111927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.406 [2024-08-13 06:12:49.111937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.406 [2024-08-13 06:12:49.113892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.406 [2024-08-13 06:12:49.113988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:17:47.407 BaseBdev2 00:17:47.407 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:17:47.664 spare_malloc 00:17:47.664 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:47.922 spare_delay 00:17:47.922 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:48.182 [2024-08-13 06:12:49.759053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.182 [2024-08-13 06:12:49.759108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.182 [2024-08-13 06:12:49.759126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:48.182 [2024-08-13 06:12:49.759137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.182 [2024-08-13 06:12:49.761176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.182 [2024-08-13 06:12:49.761265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.182 spare 00:17:48.182 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:17:48.182 [2024-08-13 06:12:49.954735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.182 [2024-08-13 06:12:49.956472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.182 [2024-08-13 06:12:49.956572] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:48.182 [2024-08-13 06:12:49.956587] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:48.182 [2024-08-13 06:12:49.956844] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:48.182 [2024-08-13 06:12:49.956982] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:48.182 [2024-08-13 06:12:49.956992] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:48.182 [2024-08-13 06:12:49.957123] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.440 06:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.440 06:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.440 "name": "raid_bdev1", 00:17:48.440 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:48.440 "strip_size_kb": 0, 00:17:48.441 "state": "online", 00:17:48.441 "raid_level": "raid1", 00:17:48.441 "superblock": false, 00:17:48.441 "num_base_bdevs": 2, 00:17:48.441 "num_base_bdevs_discovered": 2, 00:17:48.441 "num_base_bdevs_operational": 2, 00:17:48.441 "base_bdevs_list": [ 00:17:48.441 { 00:17:48.441 "name": "BaseBdev1", 00:17:48.441 "uuid": "2a093205-15bc-5238-b8a3-d7dd44719b4d", 00:17:48.441 "is_configured": true, 00:17:48.441 "data_offset": 0, 00:17:48.441 "data_size": 65536 00:17:48.441 }, 00:17:48.441 { 00:17:48.441 "name": "BaseBdev2", 00:17:48.441 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:48.441 "is_configured": true, 00:17:48.441 "data_offset": 0, 00:17:48.441 "data_size": 65536 00:17:48.441 } 00:17:48.441 ] 00:17:48.441 }' 00:17:48.441 06:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.441 06:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.006 06:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:17:49.006 06:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.263 [2024-08-13 06:12:50.897480] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.263 06:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:17:49.263 06:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.263 06:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:49.521 [2024-08-13 06:12:51.186727] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:49.521 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:49.521 Zero copy mechanism will not be used. 00:17:49.521 Running I/O for 60 seconds... 
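For readability, the RPC sequence the test has issued up to this point can be condensed as follows. The commands, sizes, and socket path are copied from the trace above; the bash loop over base bdevs and the intermediate state checks are omitted, so this is a sketch rather than the literal script:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # two 32 MB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

  # the spare gets an extra delay bdev so the later rebuild is slow enough to observe
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare

  # assemble the RAID1 bdev and read back its state
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

  # degrade the array and kick off background I/O through bdevperf
  $RPC bdev_raid_remove_base_bdev BaseBdev1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests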
00:17:49.521 [2024-08-13 06:12:51.259849] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.521 [2024-08-13 06:12:51.265099] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.521 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.779 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.779 "name": "raid_bdev1", 00:17:49.779 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:49.779 "strip_size_kb": 0, 00:17:49.779 "state": "online", 00:17:49.779 "raid_level": "raid1", 00:17:49.779 "superblock": false, 00:17:49.779 "num_base_bdevs": 2, 00:17:49.779 "num_base_bdevs_discovered": 1, 00:17:49.779 "num_base_bdevs_operational": 1, 00:17:49.779 "base_bdevs_list": [ 00:17:49.779 { 00:17:49.779 "name": null, 00:17:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.779 "is_configured": false, 00:17:49.779 "data_offset": 0, 00:17:49.779 "data_size": 65536 00:17:49.779 }, 00:17:49.779 { 00:17:49.779 "name": "BaseBdev2", 00:17:49.779 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:49.779 "is_configured": true, 00:17:49.779 "data_offset": 0, 00:17:49.779 "data_size": 65536 00:17:49.779 } 00:17:49.779 ] 00:17:49.779 }' 00:17:49.779 06:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.779 06:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.346 06:12:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.604 [2024-08-13 06:12:52.268572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.604 [2024-08-13 06:12:52.297688] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:17:50.604 [2024-08-13 06:12:52.299479] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.604 06:12:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:17:50.862 [2024-08-13 06:12:52.402513] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:50.862 [2024-08-13 06:12:52.402970] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:50.862 [2024-08-13 06:12:52.626167] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:50.862 [2024-08-13 06:12:52.626440] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:51.429 [2024-08-13 06:12:52.950708] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:51.429 [2024-08-13 06:12:53.063585] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:51.429 [2024-08-13 06:12:53.063892] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:51.688 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.688 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:51.688 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:51.688 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:51.688 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:51.688 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.688 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.688 [2024-08-13 06:12:53.389359] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:51.688 [2024-08-13 06:12:53.389777] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:51.946 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.946 "name": "raid_bdev1", 00:17:51.946 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:51.946 "strip_size_kb": 0, 00:17:51.946 "state": "online", 00:17:51.946 "raid_level": "raid1", 00:17:51.946 "superblock": false, 00:17:51.946 "num_base_bdevs": 2, 00:17:51.946 "num_base_bdevs_discovered": 2, 00:17:51.946 "num_base_bdevs_operational": 2, 00:17:51.946 "process": { 00:17:51.946 "type": "rebuild", 00:17:51.946 "target": "spare", 00:17:51.946 "progress": { 00:17:51.946 "blocks": 14336, 00:17:51.946 "percent": 21 00:17:51.946 } 00:17:51.946 }, 00:17:51.946 "base_bdevs_list": [ 00:17:51.946 { 00:17:51.946 "name": "spare", 00:17:51.946 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:51.946 "is_configured": true, 00:17:51.946 "data_offset": 0, 00:17:51.946 "data_size": 65536 00:17:51.946 }, 00:17:51.946 { 00:17:51.946 "name": "BaseBdev2", 00:17:51.946 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:51.946 "is_configured": true, 00:17:51.946 "data_offset": 0, 00:17:51.946 "data_size": 65536 00:17:51.946 } 00:17:51.946 ] 00:17:51.946 }' 00:17:51.946 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
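While the rebuild runs, verify_raid_bdev_process pulls the raid bdev JSON and checks the process fields with the jq filters visible above. Stripped of the xtrace prefixes, it amounts to roughly the following (a sketch, not the literal function body):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_bdev_info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

  # the rebuild is considered active as long as both fields match
  [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "rebuild" ]]
  [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "spare" ]]
  # .process.progress.blocks and .process.progress.percent (14336 blocks / 21% above) track how far it has come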
00:17:51.946 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.946 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:51.946 [2024-08-13 06:12:53.597660] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:51.946 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.946 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:52.205 [2024-08-13 06:12:53.792954] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.205 [2024-08-13 06:12:53.808096] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.205 [2024-08-13 06:12:53.814942] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.205 [2024-08-13 06:12:53.815034] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.205 [2024-08-13 06:12:53.815061] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.206 [2024-08-13 06:12:53.830745] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.206 06:12:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.465 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.465 "name": "raid_bdev1", 00:17:52.465 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:52.465 "strip_size_kb": 0, 00:17:52.465 "state": "online", 00:17:52.465 "raid_level": "raid1", 00:17:52.465 "superblock": false, 00:17:52.465 "num_base_bdevs": 2, 00:17:52.465 "num_base_bdevs_discovered": 1, 00:17:52.465 "num_base_bdevs_operational": 1, 00:17:52.465 "base_bdevs_list": [ 00:17:52.465 { 00:17:52.465 "name": null, 00:17:52.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.465 "is_configured": false, 00:17:52.465 "data_offset": 0, 00:17:52.465 "data_size": 
65536 00:17:52.465 }, 00:17:52.465 { 00:17:52.465 "name": "BaseBdev2", 00:17:52.465 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:52.465 "is_configured": true, 00:17:52.465 "data_offset": 0, 00:17:52.465 "data_size": 65536 00:17:52.465 } 00:17:52.465 ] 00:17:52.465 }' 00:17:52.465 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.465 06:12:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.033 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.033 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:53.033 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:53.033 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:53.033 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:53.033 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.033 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.292 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.292 "name": "raid_bdev1", 00:17:53.292 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:53.292 "strip_size_kb": 0, 00:17:53.292 "state": "online", 00:17:53.292 "raid_level": "raid1", 00:17:53.292 "superblock": false, 00:17:53.292 "num_base_bdevs": 2, 00:17:53.292 "num_base_bdevs_discovered": 1, 00:17:53.292 "num_base_bdevs_operational": 1, 00:17:53.292 "base_bdevs_list": [ 00:17:53.292 { 00:17:53.292 "name": null, 00:17:53.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.292 "is_configured": false, 00:17:53.292 "data_offset": 0, 00:17:53.292 "data_size": 65536 00:17:53.292 }, 00:17:53.292 { 00:17:53.292 "name": "BaseBdev2", 00:17:53.292 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:53.292 "is_configured": true, 00:17:53.292 "data_offset": 0, 00:17:53.292 "data_size": 65536 00:17:53.292 } 00:17:53.292 ] 00:17:53.292 }' 00:17:53.292 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:53.292 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:53.292 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:53.293 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:53.293 06:12:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.552 [2024-08-13 06:12:55.096923] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.552 [2024-08-13 06:12:55.130691] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:17:53.552 [2024-08-13 06:12:55.132461] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.552 06:12:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:17:53.552 [2024-08-13 06:12:55.239805] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:17:53.552 [2024-08-13 06:12:55.240275] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:53.811 [2024-08-13 06:12:55.452396] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:53.811 [2024-08-13 06:12:55.452660] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:54.071 [2024-08-13 06:12:55.774172] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:54.071 [2024-08-13 06:12:55.774585] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:54.330 [2024-08-13 06:12:55.992114] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.590 "name": "raid_bdev1", 00:17:54.590 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:54.590 "strip_size_kb": 0, 00:17:54.590 "state": "online", 00:17:54.590 "raid_level": "raid1", 00:17:54.590 "superblock": false, 00:17:54.590 "num_base_bdevs": 2, 00:17:54.590 "num_base_bdevs_discovered": 2, 00:17:54.590 "num_base_bdevs_operational": 2, 00:17:54.590 "process": { 00:17:54.590 "type": "rebuild", 00:17:54.590 "target": "spare", 00:17:54.590 "progress": { 00:17:54.590 "blocks": 14336, 00:17:54.590 "percent": 21 00:17:54.590 } 00:17:54.590 }, 00:17:54.590 "base_bdevs_list": [ 00:17:54.590 { 00:17:54.590 "name": "spare", 00:17:54.590 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:54.590 "is_configured": true, 00:17:54.590 "data_offset": 0, 00:17:54.590 "data_size": 65536 00:17:54.590 }, 00:17:54.590 { 00:17:54.590 "name": "BaseBdev2", 00:17:54.590 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:54.590 "is_configured": true, 00:17:54.590 "data_offset": 0, 00:17:54.590 "data_size": 65536 00:17:54.590 } 00:17:54.590 ] 00:17:54.590 }' 00:17:54.590 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 
-- # '[' false = true ']' 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=728 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:54.849 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.850 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:54.850 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:54.850 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:54.850 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:54.850 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.850 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.109 [2024-08-13 06:12:56.642404] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:55.109 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.109 "name": "raid_bdev1", 00:17:55.109 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:55.109 "strip_size_kb": 0, 00:17:55.109 "state": "online", 00:17:55.109 "raid_level": "raid1", 00:17:55.109 "superblock": false, 00:17:55.109 "num_base_bdevs": 2, 00:17:55.109 "num_base_bdevs_discovered": 2, 00:17:55.109 "num_base_bdevs_operational": 2, 00:17:55.109 "process": { 00:17:55.109 "type": "rebuild", 00:17:55.109 "target": "spare", 00:17:55.109 "progress": { 00:17:55.109 "blocks": 18432, 00:17:55.109 "percent": 28 00:17:55.109 } 00:17:55.109 }, 00:17:55.109 "base_bdevs_list": [ 00:17:55.109 { 00:17:55.109 "name": "spare", 00:17:55.109 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:55.109 "is_configured": true, 00:17:55.109 "data_offset": 0, 00:17:55.109 "data_size": 65536 00:17:55.109 }, 00:17:55.109 { 00:17:55.109 "name": "BaseBdev2", 00:17:55.109 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:55.109 "is_configured": true, 00:17:55.109 "data_offset": 0, 00:17:55.109 "data_size": 65536 00:17:55.109 } 00:17:55.109 ] 00:17:55.109 }' 00:17:55.109 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:55.109 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.109 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:55.109 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.109 06:12:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:55.109 [2024-08-13 06:12:56.760322] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:55.109 [2024-08-13 06:12:56.760625] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:55.368 [2024-08-13 06:12:57.069041] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:55.937 [2024-08-13 06:12:57.516370] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:56.197 [2024-08-13 06:12:57.735678] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:56.197 "name": "raid_bdev1", 00:17:56.197 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:56.197 "strip_size_kb": 0, 00:17:56.197 "state": "online", 00:17:56.197 "raid_level": "raid1", 00:17:56.197 "superblock": false, 00:17:56.197 "num_base_bdevs": 2, 00:17:56.197 "num_base_bdevs_discovered": 2, 00:17:56.197 "num_base_bdevs_operational": 2, 00:17:56.197 "process": { 00:17:56.197 "type": "rebuild", 00:17:56.197 "target": "spare", 00:17:56.197 "progress": { 00:17:56.197 "blocks": 34816, 00:17:56.197 "percent": 53 00:17:56.197 } 00:17:56.197 }, 00:17:56.197 "base_bdevs_list": [ 00:17:56.197 { 00:17:56.197 "name": "spare", 00:17:56.197 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:56.197 "is_configured": true, 00:17:56.197 "data_offset": 0, 00:17:56.197 "data_size": 65536 00:17:56.197 }, 00:17:56.197 { 00:17:56.197 "name": "BaseBdev2", 00:17:56.197 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:56.197 "is_configured": true, 00:17:56.197 "data_offset": 0, 00:17:56.197 "data_size": 65536 00:17:56.197 } 00:17:56.197 ] 00:17:56.197 }' 00:17:56.197 06:12:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:56.456 06:12:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.456 06:12:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:56.456 [2024-08-13 06:12:58.044057] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:56.456 06:12:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.456 06:12:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:56.716 [2024-08-13 06:12:58.255959] bdev_raid.c: 852:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:57.284 [2024-08-13 06:12:58.860300] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:57.284 [2024-08-13 06:12:58.860644] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:57.284 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:57.284 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.284 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:57.284 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:57.284 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:57.284 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:57.543 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.543 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.543 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.543 "name": "raid_bdev1", 00:17:57.543 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:57.543 "strip_size_kb": 0, 00:17:57.543 "state": "online", 00:17:57.543 "raid_level": "raid1", 00:17:57.543 "superblock": false, 00:17:57.543 "num_base_bdevs": 2, 00:17:57.543 "num_base_bdevs_discovered": 2, 00:17:57.543 "num_base_bdevs_operational": 2, 00:17:57.543 "process": { 00:17:57.543 "type": "rebuild", 00:17:57.543 "target": "spare", 00:17:57.543 "progress": { 00:17:57.543 "blocks": 55296, 00:17:57.543 "percent": 84 00:17:57.543 } 00:17:57.543 }, 00:17:57.543 "base_bdevs_list": [ 00:17:57.543 { 00:17:57.543 "name": "spare", 00:17:57.543 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:57.543 "is_configured": true, 00:17:57.543 "data_offset": 0, 00:17:57.543 "data_size": 65536 00:17:57.543 }, 00:17:57.543 { 00:17:57.543 "name": "BaseBdev2", 00:17:57.543 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:57.543 "is_configured": true, 00:17:57.543 "data_offset": 0, 00:17:57.543 "data_size": 65536 00:17:57.543 } 00:17:57.543 ] 00:17:57.543 }' 00:17:57.543 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:57.543 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.543 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:57.802 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.802 06:12:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:17:57.802 [2024-08-13 06:12:59.391517] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:17:58.073 [2024-08-13 06:12:59.824064] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:58.356 [2024-08-13 06:12:59.928733] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:58.356 
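Those progress checks sit inside the bounded wait loop visible at bdev_raid.sh lines 721-726 of the trace: a deadline is derived from bash's elapsed-seconds counter (the trace shows timeout=728, i.e. the SECONDS value at that moment plus roughly 720), and the rebuild state is re-checked once per second until it stops reporting "rebuild". A rough reconstruction, hedged since only the xtrace is visible:

  timeout=$((SECONDS + 720))                 # assumed margin; the trace only shows the resulting value 728
  while (( SECONDS < timeout )); do
      verify_raid_bdev_process raid_bdev1 rebuild spare || break
      sleep 1
  done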
[2024-08-13 06:12:59.930922] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.632 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.892 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.892 "name": "raid_bdev1", 00:17:58.892 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:58.892 "strip_size_kb": 0, 00:17:58.892 "state": "online", 00:17:58.892 "raid_level": "raid1", 00:17:58.892 "superblock": false, 00:17:58.892 "num_base_bdevs": 2, 00:17:58.892 "num_base_bdevs_discovered": 2, 00:17:58.892 "num_base_bdevs_operational": 2, 00:17:58.892 "base_bdevs_list": [ 00:17:58.892 { 00:17:58.892 "name": "spare", 00:17:58.892 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:58.892 "is_configured": true, 00:17:58.892 "data_offset": 0, 00:17:58.892 "data_size": 65536 00:17:58.892 }, 00:17:58.892 { 00:17:58.892 "name": "BaseBdev2", 00:17:58.892 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:58.892 "is_configured": true, 00:17:58.892 "data_offset": 0, 00:17:58.892 "data_size": 65536 00:17:58.892 } 00:17:58.892 ] 00:17:58.892 }' 00:17:58.892 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:58.892 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:58.892 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:59.151 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:17:59.151 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:17:59.151 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.151 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:59.151 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:17:59.152 "name": "raid_bdev1", 00:17:59.152 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:59.152 "strip_size_kb": 0, 00:17:59.152 "state": "online", 00:17:59.152 "raid_level": "raid1", 00:17:59.152 "superblock": false, 00:17:59.152 "num_base_bdevs": 2, 00:17:59.152 "num_base_bdevs_discovered": 2, 00:17:59.152 "num_base_bdevs_operational": 2, 00:17:59.152 "base_bdevs_list": [ 00:17:59.152 { 00:17:59.152 "name": "spare", 00:17:59.152 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:59.152 "is_configured": true, 00:17:59.152 "data_offset": 0, 00:17:59.152 "data_size": 65536 00:17:59.152 }, 00:17:59.152 { 00:17:59.152 "name": "BaseBdev2", 00:17:59.152 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:59.152 "is_configured": true, 00:17:59.152 "data_offset": 0, 00:17:59.152 "data_size": 65536 00:17:59.152 } 00:17:59.152 ] 00:17:59.152 }' 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:59.152 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.411 06:13:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.411 06:13:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.411 "name": "raid_bdev1", 00:17:59.411 "uuid": "e2a63924-3c15-42eb-ab6f-d8c723b0710d", 00:17:59.411 "strip_size_kb": 0, 00:17:59.411 "state": "online", 00:17:59.411 "raid_level": "raid1", 00:17:59.411 "superblock": false, 00:17:59.411 "num_base_bdevs": 2, 00:17:59.411 "num_base_bdevs_discovered": 2, 00:17:59.411 "num_base_bdevs_operational": 2, 00:17:59.411 "base_bdevs_list": [ 00:17:59.411 { 00:17:59.411 "name": "spare", 00:17:59.411 "uuid": "0a019263-f4a4-590f-a745-102df935ab58", 00:17:59.411 "is_configured": true, 00:17:59.411 "data_offset": 0, 00:17:59.411 "data_size": 65536 00:17:59.411 }, 00:17:59.411 { 00:17:59.411 "name": "BaseBdev2", 00:17:59.411 "uuid": "9937ffca-1854-526f-b6a5-147591d1fc06", 00:17:59.411 
"is_configured": true, 00:17:59.411 "data_offset": 0, 00:17:59.412 "data_size": 65536 00:17:59.412 } 00:17:59.412 ] 00:17:59.412 }' 00:17:59.412 06:13:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.412 06:13:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.980 06:13:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:00.240 [2024-08-13 06:13:01.885246] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.240 [2024-08-13 06:13:01.885288] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.240 00:18:00.240 Latency(us) 00:18:00.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.240 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:00.240 raid_bdev1 : 10.78 124.48 373.43 0.00 0.00 11024.65 268.30 112641.79 00:18:00.240 =================================================================================================================== 00:18:00.240 Total : 124.48 373.43 0.00 0.00 11024.65 268.30 112641.79 00:18:00.240 [2024-08-13 06:13:01.951985] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.240 [2024-08-13 06:13:01.952024] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.240 [2024-08-13 06:13:01.952101] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.240 [2024-08-13 06:13:01.952113] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:18:00.240 0 00:18:00.240 06:13:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.240 06:13:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.499 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:00.500 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.500 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.500 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:18:00.759 /dev/nbd0 
00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:00.759 1+0 records in 00:18:00.759 1+0 records out 00:18:00.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566289 s, 7.2 MB/s 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.759 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:01.019 /dev/nbd1 
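The waitfornbd helper from common/autotest_common.sh, whose xtrace appears above for nbd0, waits for the kernel to publish the NBD device and then proves it can serve a read. Condensed from the checks in the trace; the retry delay and failure path are assumptions, since the trace only shows the successful first attempt:

  waitfornbd() {
      local nbd_name=$1 i size
      # up to 20 attempts for the device to show up in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                           # assumed retry delay
      done
      # a single 4 KiB O_DIRECT read confirms the device answers I/O
      for ((i = 1; i <= 20; i++)); do
          dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
          size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
          rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
          [[ $size != 0 ]] && return 0
      done
      return 1                                # assumed failure path
  }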
00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.019 1+0 records in 00:18:01.019 1+0 records out 00:18:01.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371068 s, 11.0 MB/s 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.019 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.278 06:13:02 
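Put together, the data check at the end of the test exports the rebuilt spare and the surviving BaseBdev2 as NBD block devices, compares them byte for byte, and tears the NBD devices down again. The commands are the ones shown in the trace; data_offset is 0 for this array, hence cmp -i 0 (skip zero initial bytes on both devices):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC nbd_start_disk spare     /dev/nbd0
  $RPC nbd_start_disk BaseBdev2 /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1              # identical content means the rebuild reproduced BaseBdev2's data on the spare
  $RPC nbd_stop_disk /dev/nbd1
  $RPC nbd_stop_disk /dev/nbd0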
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:01.278 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.279 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:01.279 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.279 06:13:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 93392 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 93392 ']' 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 93392 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93392 00:18:01.538 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:01.539 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:01.539 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93392' 00:18:01.539 killing process with pid 93392 00:18:01.539 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 93392 00:18:01.539 Received shutdown signal, test time was about 12.048961 seconds 00:18:01.539 00:18:01.539 Latency(us) 00:18:01.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.539 
=================================================================================================================== 00:18:01.539 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.539 [2024-08-13 06:13:03.214497] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.539 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 93392 00:18:01.539 [2024-08-13 06:13:03.240280] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:18:01.799 00:18:01.799 real 0m16.009s 00:18:01.799 user 0m24.122s 00:18:01.799 sys 0m2.325s 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.799 ************************************ 00:18:01.799 END TEST raid_rebuild_test_io 00:18:01.799 ************************************ 00:18:01.799 06:13:03 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:18:01.799 06:13:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:01.799 06:13:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:01.799 06:13:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.799 ************************************ 00:18:01.799 START TEST raid_rebuild_test_sb_io 00:18:01.799 ************************************ 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true true true 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:18:01.799 06:13:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=93818 00:18:01.799 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 93818 /var/tmp/spdk-raid.sock 00:18:01.800 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:01.800 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 93818 ']' 00:18:01.800 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:01.800 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.800 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:01.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:01.800 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.800 06:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.062 [2024-08-13 06:13:03.658968] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:18:02.062 [2024-08-13 06:13:03.659186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:02.062 Zero copy mechanism will not be used. 
00:18:02.062 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93818 ] 00:18:02.062 [2024-08-13 06:13:03.801714] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.062 [2024-08-13 06:13:03.848489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.322 [2024-08-13 06:13:03.891502] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.322 [2024-08-13 06:13:03.891614] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.890 06:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.890 06:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:18:02.890 06:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:02.890 06:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:02.890 BaseBdev1_malloc 00:18:02.890 06:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.149 [2024-08-13 06:13:04.844122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:03.149 [2024-08-13 06:13:04.844263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.149 [2024-08-13 06:13:04.844302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:03.149 [2024-08-13 06:13:04.844342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.149 [2024-08-13 06:13:04.846409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.149 [2024-08-13 06:13:04.846492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.149 BaseBdev1 00:18:03.149 06:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:03.149 06:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:03.408 BaseBdev2_malloc 00:18:03.408 06:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:03.668 [2024-08-13 06:13:05.240032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:03.668 [2024-08-13 06:13:05.240178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.668 [2024-08-13 06:13:05.240214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:03.668 [2024-08-13 06:13:05.240243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.668 [2024-08-13 06:13:05.242214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.668 [2024-08-13 06:13:05.242293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:03.668 BaseBdev2 00:18:03.668 06:13:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:03.927 spare_malloc 00:18:03.927 06:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:03.927 spare_delay 00:18:03.927 06:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:04.186 [2024-08-13 06:13:05.858512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.186 [2024-08-13 06:13:05.858627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.186 [2024-08-13 06:13:05.858665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:04.186 [2024-08-13 06:13:05.858696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.186 [2024-08-13 06:13:05.860655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.187 [2024-08-13 06:13:05.860728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.187 spare 00:18:04.187 06:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:04.446 [2024-08-13 06:13:06.062206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.446 [2024-08-13 06:13:06.063937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.446 [2024-08-13 06:13:06.064102] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:18:04.446 [2024-08-13 06:13:06.064120] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:04.446 [2024-08-13 06:13:06.064349] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:04.446 [2024-08-13 06:13:06.064481] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:18:04.446 [2024-08-13 06:13:06.064490] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:18:04.446 [2024-08-13 06:13:06.064605] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
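The xtrace above shows the sb_io variant assembling its array entirely over the RPC socket: two malloc bdevs wrapped in passthru bdevs become the RAID1 legs, and a third malloc bdev behind a delay+passthru stack becomes the future rebuild target ("spare"). A condensed sketch of that setup follows; it is not part of the captured output, it assumes an SPDK application (here bdevperf) is already listening on /var/tmp/spdk-raid.sock, and it reuses the names and arguments exactly as they appear in the trace.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc           # 32 MB malloc bdev, 512-byte blocks
  $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1   # claim it behind a passthru vbdev
  $RPC bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  $RPC bdev_malloc_create 32 512 -b spare_malloc               # backing store for the rebuild target
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000   # add write latency (us) so rebuild progress is observable
  $RPC bdev_passthru_create -b spare_delay -p spare
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1   # -s: create with on-disk superblock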
00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.446 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.705 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.705 "name": "raid_bdev1", 00:18:04.705 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:04.705 "strip_size_kb": 0, 00:18:04.705 "state": "online", 00:18:04.705 "raid_level": "raid1", 00:18:04.705 "superblock": true, 00:18:04.705 "num_base_bdevs": 2, 00:18:04.705 "num_base_bdevs_discovered": 2, 00:18:04.705 "num_base_bdevs_operational": 2, 00:18:04.705 "base_bdevs_list": [ 00:18:04.705 { 00:18:04.705 "name": "BaseBdev1", 00:18:04.705 "uuid": "48564f3c-d9f8-5811-a5da-1028168355a0", 00:18:04.705 "is_configured": true, 00:18:04.706 "data_offset": 2048, 00:18:04.706 "data_size": 63488 00:18:04.706 }, 00:18:04.706 { 00:18:04.706 "name": "BaseBdev2", 00:18:04.706 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:04.706 "is_configured": true, 00:18:04.706 "data_offset": 2048, 00:18:04.706 "data_size": 63488 00:18:04.706 } 00:18:04.706 ] 00:18:04.706 }' 00:18:04.706 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.706 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.274 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:05.274 06:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:18:05.274 [2024-08-13 06:13:07.008914] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.274 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:18:05.274 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:05.274 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.533 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:18:05.533 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:18:05.533 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:05.533 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:05.793 [2024-08-13 06:13:07.338136] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:18:05.793 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:05.793 Zero copy mechanism will not be used. 00:18:05.793 Running I/O for 60 seconds... 
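The JSON blobs dumped above come from bdev_raid_get_bdevs, filtered through jq by the verify helpers; the same pattern works interactively. Below is a sketch, not part of the trace, of those state checks and of the step that pushes the array into degraded mode while bdevperf keeps background I/O running (same socket and names as in the trace).

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'    # full state blob for the array
  $RPC bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'     # 2048 blocks here (superblock variant)
  $RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'                    # 63488 = usable size after the data offset
  $RPC bdev_raid_remove_base_bdev BaseBdev1                                     # drop one leg; raid1 stays online, degraded (1 of 2)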
00:18:05.793 [2024-08-13 06:13:07.402296] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:05.793 [2024-08-13 06:13:07.407589] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.793 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.052 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.052 "name": "raid_bdev1", 00:18:06.052 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:06.052 "strip_size_kb": 0, 00:18:06.052 "state": "online", 00:18:06.052 "raid_level": "raid1", 00:18:06.052 "superblock": true, 00:18:06.052 "num_base_bdevs": 2, 00:18:06.052 "num_base_bdevs_discovered": 1, 00:18:06.052 "num_base_bdevs_operational": 1, 00:18:06.052 "base_bdevs_list": [ 00:18:06.052 { 00:18:06.052 "name": null, 00:18:06.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.052 "is_configured": false, 00:18:06.052 "data_offset": 2048, 00:18:06.052 "data_size": 63488 00:18:06.052 }, 00:18:06.052 { 00:18:06.052 "name": "BaseBdev2", 00:18:06.052 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:06.052 "is_configured": true, 00:18:06.052 "data_offset": 2048, 00:18:06.052 "data_size": 63488 00:18:06.052 } 00:18:06.052 ] 00:18:06.052 }' 00:18:06.052 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.052 06:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.621 06:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.621 [2024-08-13 06:13:08.389928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.881 06:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:06.881 [2024-08-13 06:13:08.430891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:18:06.881 [2024-08-13 06:13:08.432775] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.881 
[2024-08-13 06:13:08.549871] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:06.881 [2024-08-13 06:13:08.550353] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:07.140 [2024-08-13 06:13:08.767651] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:07.140 [2024-08-13 06:13:08.767913] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:07.400 [2024-08-13 06:13:09.126484] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:07.400 [2024-08-13 06:13:09.126794] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:07.659 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.659 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:07.659 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:07.659 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:07.659 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:07.659 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.659 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.919 [2024-08-13 06:13:09.465569] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:07.919 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.919 "name": "raid_bdev1", 00:18:07.919 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:07.919 "strip_size_kb": 0, 00:18:07.919 "state": "online", 00:18:07.919 "raid_level": "raid1", 00:18:07.919 "superblock": true, 00:18:07.919 "num_base_bdevs": 2, 00:18:07.919 "num_base_bdevs_discovered": 2, 00:18:07.919 "num_base_bdevs_operational": 2, 00:18:07.919 "process": { 00:18:07.919 "type": "rebuild", 00:18:07.919 "target": "spare", 00:18:07.919 "progress": { 00:18:07.919 "blocks": 14336, 00:18:07.919 "percent": 22 00:18:07.919 } 00:18:07.919 }, 00:18:07.919 "base_bdevs_list": [ 00:18:07.919 { 00:18:07.919 "name": "spare", 00:18:07.919 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:07.919 "is_configured": true, 00:18:07.919 "data_offset": 2048, 00:18:07.919 "data_size": 63488 00:18:07.919 }, 00:18:07.919 { 00:18:07.919 "name": "BaseBdev2", 00:18:07.919 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:07.919 "is_configured": true, 00:18:07.919 "data_offset": 2048, 00:18:07.919 "data_size": 63488 00:18:07.919 } 00:18:07.919 ] 00:18:07.919 }' 00:18:07.919 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:07.919 [2024-08-13 06:13:09.688374] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:07.919 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:07.919 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:08.178 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.179 06:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:08.179 [2024-08-13 06:13:09.911323] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.439 [2024-08-13 06:13:10.018278] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.439 [2024-08-13 06:13:10.025115] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.439 [2024-08-13 06:13:10.025210] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.439 [2024-08-13 06:13:10.025238] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.439 [2024-08-13 06:13:10.046018] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.439 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.699 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.699 "name": "raid_bdev1", 00:18:08.699 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:08.699 "strip_size_kb": 0, 00:18:08.699 "state": "online", 00:18:08.699 "raid_level": "raid1", 00:18:08.699 "superblock": true, 00:18:08.699 "num_base_bdevs": 2, 00:18:08.699 "num_base_bdevs_discovered": 1, 00:18:08.699 "num_base_bdevs_operational": 1, 00:18:08.699 "base_bdevs_list": [ 00:18:08.699 { 00:18:08.699 "name": null, 00:18:08.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.699 "is_configured": false, 00:18:08.699 "data_offset": 2048, 00:18:08.699 "data_size": 63488 00:18:08.699 }, 00:18:08.699 { 00:18:08.699 "name": "BaseBdev2", 00:18:08.699 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:08.699 "is_configured": true, 00:18:08.699 "data_offset": 2048, 
00:18:08.699 "data_size": 63488 00:18:08.699 } 00:18:08.699 ] 00:18:08.699 }' 00:18:08.699 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.699 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.268 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.268 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:09.268 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:09.268 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:09.268 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:09.268 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.268 06:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.527 06:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.527 "name": "raid_bdev1", 00:18:09.527 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:09.527 "strip_size_kb": 0, 00:18:09.527 "state": "online", 00:18:09.527 "raid_level": "raid1", 00:18:09.527 "superblock": true, 00:18:09.527 "num_base_bdevs": 2, 00:18:09.527 "num_base_bdevs_discovered": 1, 00:18:09.527 "num_base_bdevs_operational": 1, 00:18:09.527 "base_bdevs_list": [ 00:18:09.527 { 00:18:09.527 "name": null, 00:18:09.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.527 "is_configured": false, 00:18:09.527 "data_offset": 2048, 00:18:09.527 "data_size": 63488 00:18:09.527 }, 00:18:09.527 { 00:18:09.527 "name": "BaseBdev2", 00:18:09.527 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:09.527 "is_configured": true, 00:18:09.527 "data_offset": 2048, 00:18:09.527 "data_size": 63488 00:18:09.527 } 00:18:09.527 ] 00:18:09.527 }' 00:18:09.527 06:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:09.528 06:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:09.528 06:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:09.528 06:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:09.528 06:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:09.787 [2024-08-13 06:13:11.376177] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.787 [2024-08-13 06:13:11.432489] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:18:09.787 [2024-08-13 06:13:11.434254] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.787 06:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:18:09.787 [2024-08-13 06:13:11.551648] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:09.787 [2024-08-13 06:13:11.552060] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:18:10.046 [2024-08-13 06:13:11.764242] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:10.046 [2024-08-13 06:13:11.764433] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:10.305 [2024-08-13 06:13:12.090129] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:10.564 [2024-08-13 06:13:12.211175] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:10.824 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.824 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:10.824 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:10.824 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:10.824 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:10.824 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.824 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.824 [2024-08-13 06:13:12.589862] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:11.083 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.083 "name": "raid_bdev1", 00:18:11.083 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:11.083 "strip_size_kb": 0, 00:18:11.083 "state": "online", 00:18:11.083 "raid_level": "raid1", 00:18:11.083 "superblock": true, 00:18:11.083 "num_base_bdevs": 2, 00:18:11.083 "num_base_bdevs_discovered": 2, 00:18:11.083 "num_base_bdevs_operational": 2, 00:18:11.083 "process": { 00:18:11.083 "type": "rebuild", 00:18:11.083 "target": "spare", 00:18:11.083 "progress": { 00:18:11.083 "blocks": 16384, 00:18:11.083 "percent": 25 00:18:11.083 } 00:18:11.083 }, 00:18:11.083 "base_bdevs_list": [ 00:18:11.083 { 00:18:11.083 "name": "spare", 00:18:11.083 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:11.083 "is_configured": true, 00:18:11.083 "data_offset": 2048, 00:18:11.083 "data_size": 63488 00:18:11.083 }, 00:18:11.083 { 00:18:11.083 "name": "BaseBdev2", 00:18:11.083 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:11.083 "is_configured": true, 00:18:11.083 "data_offset": 2048, 00:18:11.083 "data_size": 63488 00:18:11.083 } 00:18:11.083 ] 00:18:11.083 }' 00:18:11.083 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:11.083 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.083 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:11.083 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.083 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:18:11.083 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:18:11.083 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=744 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.084 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.343 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.343 "name": "raid_bdev1", 00:18:11.343 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:11.343 "strip_size_kb": 0, 00:18:11.343 "state": "online", 00:18:11.343 "raid_level": "raid1", 00:18:11.343 "superblock": true, 00:18:11.343 "num_base_bdevs": 2, 00:18:11.343 "num_base_bdevs_discovered": 2, 00:18:11.343 "num_base_bdevs_operational": 2, 00:18:11.343 "process": { 00:18:11.343 "type": "rebuild", 00:18:11.343 "target": "spare", 00:18:11.343 "progress": { 00:18:11.343 "blocks": 18432, 00:18:11.343 "percent": 29 00:18:11.343 } 00:18:11.343 }, 00:18:11.343 "base_bdevs_list": [ 00:18:11.343 { 00:18:11.343 "name": "spare", 00:18:11.343 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:11.343 "is_configured": true, 00:18:11.343 "data_offset": 2048, 00:18:11.343 "data_size": 63488 00:18:11.343 }, 00:18:11.343 { 00:18:11.343 "name": "BaseBdev2", 00:18:11.343 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:11.343 "is_configured": true, 00:18:11.343 "data_offset": 2048, 00:18:11.343 "data_size": 63488 00:18:11.343 } 00:18:11.343 ] 00:18:11.343 }' 00:18:11.343 06:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:11.343 06:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.343 06:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:11.343 [2024-08-13 06:13:13.045092] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:11.343 06:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.343 06:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:11.603 [2024-08-13 06:13:13.279352] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:11.862 [2024-08-13 06:13:13.484909] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:11.862 [2024-08-13 06:13:13.485144] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:12.122 [2024-08-13 06:13:13.794223] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.381 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.381 [2024-08-13 06:13:14.149218] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:12.640 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.640 "name": "raid_bdev1", 00:18:12.640 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:12.640 "strip_size_kb": 0, 00:18:12.640 "state": "online", 00:18:12.640 "raid_level": "raid1", 00:18:12.640 "superblock": true, 00:18:12.640 "num_base_bdevs": 2, 00:18:12.640 "num_base_bdevs_discovered": 2, 00:18:12.640 "num_base_bdevs_operational": 2, 00:18:12.640 "process": { 00:18:12.640 "type": "rebuild", 00:18:12.640 "target": "spare", 00:18:12.640 "progress": { 00:18:12.640 "blocks": 38912, 00:18:12.640 "percent": 61 00:18:12.640 } 00:18:12.640 }, 00:18:12.640 "base_bdevs_list": [ 00:18:12.640 { 00:18:12.640 "name": "spare", 00:18:12.640 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:12.640 "is_configured": true, 00:18:12.640 "data_offset": 2048, 00:18:12.640 "data_size": 63488 00:18:12.640 }, 00:18:12.640 { 00:18:12.640 "name": "BaseBdev2", 00:18:12.640 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:12.640 "is_configured": true, 00:18:12.640 "data_offset": 2048, 00:18:12.640 "data_size": 63488 00:18:12.640 } 00:18:12.640 ] 00:18:12.640 }' 00:18:12.640 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:12.640 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.640 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:12.640 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.640 06:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:13.576 [2024-08-13 06:13:15.332100] 
bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.576 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.835 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:13.835 "name": "raid_bdev1", 00:18:13.835 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:13.835 "strip_size_kb": 0, 00:18:13.835 "state": "online", 00:18:13.835 "raid_level": "raid1", 00:18:13.835 "superblock": true, 00:18:13.835 "num_base_bdevs": 2, 00:18:13.835 "num_base_bdevs_discovered": 2, 00:18:13.835 "num_base_bdevs_operational": 2, 00:18:13.835 "process": { 00:18:13.835 "type": "rebuild", 00:18:13.835 "target": "spare", 00:18:13.835 "progress": { 00:18:13.835 "blocks": 61440, 00:18:13.835 "percent": 96 00:18:13.835 } 00:18:13.835 }, 00:18:13.835 "base_bdevs_list": [ 00:18:13.835 { 00:18:13.835 "name": "spare", 00:18:13.835 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:13.835 "is_configured": true, 00:18:13.835 "data_offset": 2048, 00:18:13.835 "data_size": 63488 00:18:13.835 }, 00:18:13.835 { 00:18:13.835 "name": "BaseBdev2", 00:18:13.835 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:13.835 "is_configured": true, 00:18:13.835 "data_offset": 2048, 00:18:13.835 "data_size": 63488 00:18:13.835 } 00:18:13.835 ] 00:18:13.835 }' 00:18:13.835 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:13.835 [2024-08-13 06:13:15.552749] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:13.835 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.835 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:14.094 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.094 06:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:14.094 [2024-08-13 06:13:15.652513] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:14.094 [2024-08-13 06:13:15.654182] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.030 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.289 "name": "raid_bdev1", 00:18:15.289 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:15.289 "strip_size_kb": 0, 00:18:15.289 "state": "online", 00:18:15.289 "raid_level": "raid1", 00:18:15.289 "superblock": true, 00:18:15.289 "num_base_bdevs": 2, 00:18:15.289 "num_base_bdevs_discovered": 2, 00:18:15.289 "num_base_bdevs_operational": 2, 00:18:15.289 "base_bdevs_list": [ 00:18:15.289 { 00:18:15.289 "name": "spare", 00:18:15.289 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:15.289 "is_configured": true, 00:18:15.289 "data_offset": 2048, 00:18:15.289 "data_size": 63488 00:18:15.289 }, 00:18:15.289 { 00:18:15.289 "name": "BaseBdev2", 00:18:15.289 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:15.289 "is_configured": true, 00:18:15.289 "data_offset": 2048, 00:18:15.289 "data_size": 63488 00:18:15.289 } 00:18:15.289 ] 00:18:15.289 }' 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.289 06:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.549 "name": "raid_bdev1", 00:18:15.549 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:15.549 "strip_size_kb": 0, 00:18:15.549 "state": "online", 00:18:15.549 "raid_level": "raid1", 00:18:15.549 "superblock": true, 00:18:15.549 "num_base_bdevs": 2, 00:18:15.549 "num_base_bdevs_discovered": 2, 00:18:15.549 
"num_base_bdevs_operational": 2, 00:18:15.549 "base_bdevs_list": [ 00:18:15.549 { 00:18:15.549 "name": "spare", 00:18:15.549 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:15.549 "is_configured": true, 00:18:15.549 "data_offset": 2048, 00:18:15.549 "data_size": 63488 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "name": "BaseBdev2", 00:18:15.549 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:15.549 "is_configured": true, 00:18:15.549 "data_offset": 2048, 00:18:15.549 "data_size": 63488 00:18:15.549 } 00:18:15.549 ] 00:18:15.549 }' 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.549 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.891 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.891 "name": "raid_bdev1", 00:18:15.891 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:15.891 "strip_size_kb": 0, 00:18:15.891 "state": "online", 00:18:15.891 "raid_level": "raid1", 00:18:15.891 "superblock": true, 00:18:15.891 "num_base_bdevs": 2, 00:18:15.891 "num_base_bdevs_discovered": 2, 00:18:15.891 "num_base_bdevs_operational": 2, 00:18:15.891 "base_bdevs_list": [ 00:18:15.891 { 00:18:15.891 "name": "spare", 00:18:15.891 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:15.891 "is_configured": true, 00:18:15.891 "data_offset": 2048, 00:18:15.891 "data_size": 63488 00:18:15.892 }, 00:18:15.892 { 00:18:15.892 "name": "BaseBdev2", 00:18:15.892 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:15.892 "is_configured": true, 00:18:15.892 "data_offset": 2048, 00:18:15.892 "data_size": 63488 00:18:15.892 } 00:18:15.892 ] 00:18:15.892 }' 00:18:15.892 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.892 06:13:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.459 06:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:16.459 [2024-08-13 06:13:18.170082] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.459 [2024-08-13 06:13:18.170184] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.459 00:18:16.459 Latency(us) 00:18:16.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.459 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:16.459 raid_bdev1 : 10.88 122.40 367.19 0.00 0.00 11763.03 268.30 111268.11 00:18:16.459 =================================================================================================================== 00:18:16.459 Total : 122.40 367.19 0.00 0.00 11763.03 268.30 111268.11 00:18:16.459 [2024-08-13 06:13:18.205002] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.459 [2024-08-13 06:13:18.205093] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.459 [2024-08-13 06:13:18.205194] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.459 [2024-08-13 06:13:18.205244] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:18:16.459 0 00:18:16.459 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.459 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:18:16.717 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.718 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:18:16.976 /dev/nbd0 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.976 1+0 records in 00:18:16.976 1+0 records out 00:18:16.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377814 s, 10.8 MB/s 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.976 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:17.236 /dev/nbd1 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:17.236 
06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.236 1+0 records in 00:18:17.236 1+0 records out 00:18:17.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051805 s, 7.9 MB/s 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.236 06:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.496 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:18:17.755 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:18.014 [2024-08-13 06:13:19.756695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:18.014 [2024-08-13 06:13:19.756835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.014 [2024-08-13 06:13:19.756870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:18.014 [2024-08-13 06:13:19.756899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.014 [2024-08-13 06:13:19.758882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.014 [2024-08-13 06:13:19.758964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:18.014 [2024-08-13 06:13:19.759076] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:18.014 [2024-08-13 06:13:19.759153] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.014 [2024-08-13 06:13:19.759318] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.014 spare 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.014 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.015 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.015 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.015 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.274 [2024-08-13 06:13:19.859249] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:18:18.274 [2024-08-13 06:13:19.859317] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:18.274 [2024-08-13 06:13:19.859591] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:18:18.274 [2024-08-13 06:13:19.859762] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:18:18.274 [2024-08-13 06:13:19.859810] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:18:18.274 [2024-08-13 06:13:19.859964] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.274 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.274 "name": "raid_bdev1", 00:18:18.274 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:18.274 "strip_size_kb": 0, 00:18:18.274 "state": "online", 00:18:18.274 "raid_level": "raid1", 00:18:18.274 "superblock": true, 00:18:18.274 "num_base_bdevs": 2, 00:18:18.274 "num_base_bdevs_discovered": 2, 00:18:18.274 "num_base_bdevs_operational": 2, 00:18:18.274 "base_bdevs_list": [ 00:18:18.274 { 00:18:18.274 "name": "spare", 00:18:18.274 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:18.274 "is_configured": true, 00:18:18.274 "data_offset": 2048, 00:18:18.274 "data_size": 63488 00:18:18.274 }, 00:18:18.274 { 00:18:18.274 "name": "BaseBdev2", 00:18:18.274 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:18.274 "is_configured": true, 00:18:18.274 "data_offset": 2048, 00:18:18.274 "data_size": 63488 00:18:18.274 } 00:18:18.274 ] 00:18:18.274 }' 00:18:18.274 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.274 06:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.842 06:13:20 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.842 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:18.842 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:18.842 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:18.842 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:18.842 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.842 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.107 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.107 "name": "raid_bdev1", 00:18:19.107 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:19.107 "strip_size_kb": 0, 00:18:19.107 "state": "online", 00:18:19.107 "raid_level": "raid1", 00:18:19.107 "superblock": true, 00:18:19.107 "num_base_bdevs": 2, 00:18:19.107 "num_base_bdevs_discovered": 2, 00:18:19.107 "num_base_bdevs_operational": 2, 00:18:19.107 "base_bdevs_list": [ 00:18:19.107 { 00:18:19.107 "name": "spare", 00:18:19.107 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:19.107 "is_configured": true, 00:18:19.107 "data_offset": 2048, 00:18:19.107 "data_size": 63488 00:18:19.107 }, 00:18:19.107 { 00:18:19.107 "name": "BaseBdev2", 00:18:19.107 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:19.107 "is_configured": true, 00:18:19.107 "data_offset": 2048, 00:18:19.107 "data_size": 63488 00:18:19.107 } 00:18:19.107 ] 00:18:19.107 }' 00:18:19.107 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:19.107 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:19.107 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:19.107 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:19.107 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.107 06:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:19.383 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.383 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:19.655 [2024-08-13 06:13:21.190488] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.655 "name": "raid_bdev1", 00:18:19.655 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:19.655 "strip_size_kb": 0, 00:18:19.655 "state": "online", 00:18:19.655 "raid_level": "raid1", 00:18:19.655 "superblock": true, 00:18:19.655 "num_base_bdevs": 2, 00:18:19.655 "num_base_bdevs_discovered": 1, 00:18:19.655 "num_base_bdevs_operational": 1, 00:18:19.655 "base_bdevs_list": [ 00:18:19.655 { 00:18:19.655 "name": null, 00:18:19.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.655 "is_configured": false, 00:18:19.655 "data_offset": 2048, 00:18:19.655 "data_size": 63488 00:18:19.655 }, 00:18:19.655 { 00:18:19.655 "name": "BaseBdev2", 00:18:19.655 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:19.655 "is_configured": true, 00:18:19.655 "data_offset": 2048, 00:18:19.655 "data_size": 63488 00:18:19.655 } 00:18:19.655 ] 00:18:19.655 }' 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.655 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.223 06:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.482 [2024-08-13 06:13:22.057652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.482 [2024-08-13 06:13:22.057932] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.482 [2024-08-13 06:13:22.057950] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
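The trace above re-adds the previously removed base bdev "spare" to raid_bdev1 over the test RPC socket; the raid module claims it, notes that its superblock is older than the array's, and starts a rebuild, which the test then verifies through bdev_raid_get_bdevs. A minimal standalone sketch of that flow, assuming a running SPDK target that already exposes raid_bdev1 and a free bdev named spare (rpc.py path and socket copied from the trace), could look like this:

  #!/usr/bin/env bash
  # Illustrative sketch only: re-add a detached base bdev and wait for the
  # rebuild to start. Paths, socket and bdev names are taken from the trace.
  set -euo pipefail

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # Ask the raid module to claim "spare" again as a base bdev of raid_bdev1.
  rpc bdev_raid_add_base_bdev raid_bdev1 spare

  # Poll until raid_bdev1 reports a rebuild process whose target is "spare".
  for _ in $(seq 1 20); do
      info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      if [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild &&
            $(jq -r '.process.target // "none"' <<< "$info") == spare ]]; then
          exit 0
      fi
      sleep 1
  done
  echo "rebuild did not start on spare" >&2
  exit 1

The jq filters are the same ones the traced verify_raid_bdev_process helper uses; the retry loop is an illustrative simplification rather than that helper's actual implementation.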
00:18:20.482 [2024-08-13 06:13:22.058018] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.482 [2024-08-13 06:13:22.062422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:18:20.482 [2024-08-13 06:13:22.064124] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.482 06:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:18:21.419 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.419 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:21.419 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:21.419 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:21.419 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:21.419 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.419 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.678 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.678 "name": "raid_bdev1", 00:18:21.678 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:21.678 "strip_size_kb": 0, 00:18:21.678 "state": "online", 00:18:21.678 "raid_level": "raid1", 00:18:21.678 "superblock": true, 00:18:21.678 "num_base_bdevs": 2, 00:18:21.678 "num_base_bdevs_discovered": 2, 00:18:21.678 "num_base_bdevs_operational": 2, 00:18:21.678 "process": { 00:18:21.679 "type": "rebuild", 00:18:21.679 "target": "spare", 00:18:21.679 "progress": { 00:18:21.679 "blocks": 22528, 00:18:21.679 "percent": 35 00:18:21.679 } 00:18:21.679 }, 00:18:21.679 "base_bdevs_list": [ 00:18:21.679 { 00:18:21.679 "name": "spare", 00:18:21.679 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:21.679 "is_configured": true, 00:18:21.679 "data_offset": 2048, 00:18:21.679 "data_size": 63488 00:18:21.679 }, 00:18:21.679 { 00:18:21.679 "name": "BaseBdev2", 00:18:21.679 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:21.679 "is_configured": true, 00:18:21.679 "data_offset": 2048, 00:18:21.679 "data_size": 63488 00:18:21.679 } 00:18:21.679 ] 00:18:21.679 }' 00:18:21.679 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:21.679 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.679 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:21.679 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.679 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:21.938 [2024-08-13 06:13:23.544350] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.938 [2024-08-13 06:13:23.569164] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.938 [2024-08-13 06:13:23.569216] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:21.938 [2024-08-13 06:13:23.569231] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.938 [2024-08-13 06:13:23.569237] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.938 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.196 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.196 "name": "raid_bdev1", 00:18:22.196 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:22.196 "strip_size_kb": 0, 00:18:22.197 "state": "online", 00:18:22.197 "raid_level": "raid1", 00:18:22.197 "superblock": true, 00:18:22.197 "num_base_bdevs": 2, 00:18:22.197 "num_base_bdevs_discovered": 1, 00:18:22.197 "num_base_bdevs_operational": 1, 00:18:22.197 "base_bdevs_list": [ 00:18:22.197 { 00:18:22.197 "name": null, 00:18:22.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.197 "is_configured": false, 00:18:22.197 "data_offset": 2048, 00:18:22.197 "data_size": 63488 00:18:22.197 }, 00:18:22.197 { 00:18:22.197 "name": "BaseBdev2", 00:18:22.197 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:22.197 "is_configured": true, 00:18:22.197 "data_offset": 2048, 00:18:22.197 "data_size": 63488 00:18:22.197 } 00:18:22.197 ] 00:18:22.197 }' 00:18:22.197 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.197 06:13:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.764 06:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:22.764 [2024-08-13 06:13:24.539984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:22.764 [2024-08-13 06:13:24.540110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.764 [2024-08-13 06:13:24.540151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:22.764 [2024-08-13 06:13:24.540177] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.764 [2024-08-13 06:13:24.540592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.764 [2024-08-13 06:13:24.540651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:22.764 [2024-08-13 06:13:24.540753] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:22.764 [2024-08-13 06:13:24.540789] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.764 [2024-08-13 06:13:24.540826] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:22.764 [2024-08-13 06:13:24.540879] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.764 [2024-08-13 06:13:24.544998] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:18:22.764 spare 00:18:22.764 [2024-08-13 06:13:24.546711] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.023 06:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:18:23.960 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.960 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:23.960 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:23.960 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:23.960 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:23.960 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.960 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.220 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:24.220 "name": "raid_bdev1", 00:18:24.220 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:24.220 "strip_size_kb": 0, 00:18:24.220 "state": "online", 00:18:24.220 "raid_level": "raid1", 00:18:24.220 "superblock": true, 00:18:24.220 "num_base_bdevs": 2, 00:18:24.220 "num_base_bdevs_discovered": 2, 00:18:24.220 "num_base_bdevs_operational": 2, 00:18:24.220 "process": { 00:18:24.220 "type": "rebuild", 00:18:24.220 "target": "spare", 00:18:24.220 "progress": { 00:18:24.220 "blocks": 22528, 00:18:24.220 "percent": 35 00:18:24.220 } 00:18:24.220 }, 00:18:24.220 "base_bdevs_list": [ 00:18:24.220 { 00:18:24.220 "name": "spare", 00:18:24.220 "uuid": "4cf9dd16-b7a0-5f53-b963-876e90ec6c26", 00:18:24.220 "is_configured": true, 00:18:24.220 "data_offset": 2048, 00:18:24.220 "data_size": 63488 00:18:24.220 }, 00:18:24.220 { 00:18:24.220 "name": "BaseBdev2", 00:18:24.220 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:24.220 "is_configured": true, 00:18:24.220 "data_offset": 2048, 00:18:24.220 "data_size": 63488 00:18:24.220 } 00:18:24.220 ] 00:18:24.220 }' 00:18:24.220 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:24.220 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
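Around this point the helper asserts that the reported process type is "rebuild" and, in the matching check just below, that its target is "spare". The same bdev_raid_get_bdevs JSON also carries progress counters (.process.progress.blocks and .process.progress.percent, visible in the dump above). A small monitoring loop, purely hypothetical and not part of the test suite, could read them like this:

  # Hypothetical monitor: print rebuild progress for raid_bdev1 once per
  # second until no rebuild process is reported anymore.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  while :; do
      progress=$(rpc bdev_raid_get_bdevs all | jq -r \
          '.[] | select(.name == "raid_bdev1") | .process.progress
           | if . == null then "done" else "\(.blocks) blocks (\(.percent)%)" end')
      echo "raid_bdev1 rebuild: $progress"
      if [[ $progress == done ]]; then break; fi
      sleep 1
  done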
00:18:24.220 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:24.220 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.220 06:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:24.479 [2024-08-13 06:13:26.011454] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.479 [2024-08-13 06:13:26.051754] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:24.479 [2024-08-13 06:13:26.051815] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.479 [2024-08-13 06:13:26.051830] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.479 [2024-08-13 06:13:26.051843] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.479 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.739 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.739 "name": "raid_bdev1", 00:18:24.739 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:24.739 "strip_size_kb": 0, 00:18:24.739 "state": "online", 00:18:24.739 "raid_level": "raid1", 00:18:24.739 "superblock": true, 00:18:24.739 "num_base_bdevs": 2, 00:18:24.739 "num_base_bdevs_discovered": 1, 00:18:24.739 "num_base_bdevs_operational": 1, 00:18:24.739 "base_bdevs_list": [ 00:18:24.739 { 00:18:24.739 "name": null, 00:18:24.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.739 "is_configured": false, 00:18:24.739 "data_offset": 2048, 00:18:24.739 "data_size": 63488 00:18:24.739 }, 00:18:24.739 { 00:18:24.739 "name": "BaseBdev2", 00:18:24.739 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:24.739 "is_configured": true, 00:18:24.739 "data_offset": 2048, 00:18:24.739 "data_size": 63488 00:18:24.739 } 00:18:24.739 ] 00:18:24.739 }' 00:18:24.739 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:18:24.739 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.307 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.307 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:25.308 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:25.308 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:25.308 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:25.308 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.308 06:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.308 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:25.308 "name": "raid_bdev1", 00:18:25.308 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:25.308 "strip_size_kb": 0, 00:18:25.308 "state": "online", 00:18:25.308 "raid_level": "raid1", 00:18:25.308 "superblock": true, 00:18:25.308 "num_base_bdevs": 2, 00:18:25.308 "num_base_bdevs_discovered": 1, 00:18:25.308 "num_base_bdevs_operational": 1, 00:18:25.308 "base_bdevs_list": [ 00:18:25.308 { 00:18:25.308 "name": null, 00:18:25.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.308 "is_configured": false, 00:18:25.308 "data_offset": 2048, 00:18:25.308 "data_size": 63488 00:18:25.308 }, 00:18:25.308 { 00:18:25.308 "name": "BaseBdev2", 00:18:25.308 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:25.308 "is_configured": true, 00:18:25.308 "data_offset": 2048, 00:18:25.308 "data_size": 63488 00:18:25.308 } 00:18:25.308 ] 00:18:25.308 }' 00:18:25.308 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:25.308 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:25.308 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:25.308 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:25.308 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:25.567 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:25.826 [2024-08-13 06:13:27.462093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:25.826 [2024-08-13 06:13:27.462212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.826 [2024-08-13 06:13:27.462250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:25.826 [2024-08-13 06:13:27.462263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.826 [2024-08-13 06:13:27.462666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.826 [2024-08-13 06:13:27.462688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:18:25.826 [2024-08-13 06:13:27.462763] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:25.826 [2024-08-13 06:13:27.462779] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:25.826 [2024-08-13 06:13:27.462788] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:25.826 BaseBdev1 00:18:25.826 06:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.765 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.024 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.024 "name": "raid_bdev1", 00:18:27.024 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:27.024 "strip_size_kb": 0, 00:18:27.024 "state": "online", 00:18:27.024 "raid_level": "raid1", 00:18:27.024 "superblock": true, 00:18:27.024 "num_base_bdevs": 2, 00:18:27.024 "num_base_bdevs_discovered": 1, 00:18:27.024 "num_base_bdevs_operational": 1, 00:18:27.024 "base_bdevs_list": [ 00:18:27.024 { 00:18:27.024 "name": null, 00:18:27.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.024 "is_configured": false, 00:18:27.024 "data_offset": 2048, 00:18:27.024 "data_size": 63488 00:18:27.024 }, 00:18:27.024 { 00:18:27.024 "name": "BaseBdev2", 00:18:27.024 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:27.024 "is_configured": true, 00:18:27.024 "data_offset": 2048, 00:18:27.024 "data_size": 63488 00:18:27.024 } 00:18:27.024 ] 00:18:27.024 }' 00:18:27.024 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.024 06:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.593 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.593 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:27.593 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:18:27.593 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:27.593 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:27.593 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.593 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.852 "name": "raid_bdev1", 00:18:27.852 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:27.852 "strip_size_kb": 0, 00:18:27.852 "state": "online", 00:18:27.852 "raid_level": "raid1", 00:18:27.852 "superblock": true, 00:18:27.852 "num_base_bdevs": 2, 00:18:27.852 "num_base_bdevs_discovered": 1, 00:18:27.852 "num_base_bdevs_operational": 1, 00:18:27.852 "base_bdevs_list": [ 00:18:27.852 { 00:18:27.852 "name": null, 00:18:27.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.852 "is_configured": false, 00:18:27.852 "data_offset": 2048, 00:18:27.852 "data_size": 63488 00:18:27.852 }, 00:18:27.852 { 00:18:27.852 "name": "BaseBdev2", 00:18:27.852 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:27.852 "is_configured": true, 00:18:27.852 "data_offset": 2048, 00:18:27.852 "data_size": 63488 00:18:27.852 } 00:18:27.852 ] 00:18:27.852 }' 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@646 -- # local es=0 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:27.852 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.111 [2024-08-13 06:13:29.726450] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.111 [2024-08-13 06:13:29.726665] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.111 [2024-08-13 06:13:29.726690] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:28.111 request: 00:18:28.111 { 00:18:28.111 "base_bdev": "BaseBdev1", 00:18:28.111 "raid_bdev": "raid_bdev1", 00:18:28.111 "method": "bdev_raid_add_base_bdev", 00:18:28.111 "req_id": 1 00:18:28.111 } 00:18:28.111 Got JSON-RPC error response 00:18:28.111 response: 00:18:28.111 { 00:18:28.111 "code": -22, 00:18:28.111 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:28.111 } 00:18:28.111 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # es=1 00:18:28.111 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:18:28.111 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:18:28.111 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:18:28.111 06:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.048 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.049 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.049 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.308 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.308 "name": "raid_bdev1", 00:18:29.308 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:29.308 "strip_size_kb": 0, 00:18:29.308 "state": "online", 00:18:29.308 "raid_level": "raid1", 00:18:29.308 "superblock": true, 00:18:29.308 "num_base_bdevs": 2, 00:18:29.308 "num_base_bdevs_discovered": 1, 00:18:29.308 "num_base_bdevs_operational": 1, 00:18:29.308 
"base_bdevs_list": [ 00:18:29.308 { 00:18:29.308 "name": null, 00:18:29.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.308 "is_configured": false, 00:18:29.308 "data_offset": 2048, 00:18:29.308 "data_size": 63488 00:18:29.308 }, 00:18:29.308 { 00:18:29.308 "name": "BaseBdev2", 00:18:29.308 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:29.308 "is_configured": true, 00:18:29.308 "data_offset": 2048, 00:18:29.308 "data_size": 63488 00:18:29.308 } 00:18:29.308 ] 00:18:29.308 }' 00:18:29.308 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.308 06:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.876 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.876 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:29.876 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:29.876 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:29.876 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:29.876 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.876 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.136 "name": "raid_bdev1", 00:18:30.136 "uuid": "5e5cfe7c-f641-4c50-a741-1f8ac9337f5c", 00:18:30.136 "strip_size_kb": 0, 00:18:30.136 "state": "online", 00:18:30.136 "raid_level": "raid1", 00:18:30.136 "superblock": true, 00:18:30.136 "num_base_bdevs": 2, 00:18:30.136 "num_base_bdevs_discovered": 1, 00:18:30.136 "num_base_bdevs_operational": 1, 00:18:30.136 "base_bdevs_list": [ 00:18:30.136 { 00:18:30.136 "name": null, 00:18:30.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.136 "is_configured": false, 00:18:30.136 "data_offset": 2048, 00:18:30.136 "data_size": 63488 00:18:30.136 }, 00:18:30.136 { 00:18:30.136 "name": "BaseBdev2", 00:18:30.136 "uuid": "0e56123c-cbfa-5231-9887-ec51f8af428f", 00:18:30.136 "is_configured": true, 00:18:30.136 "data_offset": 2048, 00:18:30.136 "data_size": 63488 00:18:30.136 } 00:18:30.136 ] 00:18:30.136 }' 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 93818 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 93818 ']' 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 93818 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93818 00:18:30.136 killing process with pid 93818 00:18:30.136 Received shutdown signal, test time was about 24.569653 seconds 00:18:30.136 00:18:30.136 Latency(us) 00:18:30.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.136 =================================================================================================================== 00:18:30.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93818' 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 93818 00:18:30.136 [2024-08-13 06:13:31.862512] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.136 [2024-08-13 06:13:31.862653] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.136 06:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 93818 00:18:30.136 [2024-08-13 06:13:31.862707] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.136 [2024-08-13 06:13:31.862723] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:18:30.136 [2024-08-13 06:13:31.888198] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.396 06:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:18:30.396 00:18:30.396 real 0m28.571s 00:18:30.396 user 0m44.237s 00:18:30.396 sys 0m3.765s 00:18:30.396 06:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:30.396 06:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.396 ************************************ 00:18:30.396 END TEST raid_rebuild_test_sb_io 00:18:30.397 ************************************ 00:18:30.657 06:13:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:18:30.657 06:13:32 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:18:30.657 06:13:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:30.657 06:13:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:30.657 06:13:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.657 ************************************ 00:18:30.657 START TEST raid_rebuild_test 00:18:30.657 ************************************ 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false false true 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 
00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=94616 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 94616 /var/tmp/spdk-raid.sock 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 94616 ']' 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:30.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
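The entries above launch the bdevperf application that drives this test: it is started with its own RPC socket (-r /var/tmp/spdk-raid.sock), told to exercise raid_bdev1 with a 60-second random read/write workload, and the harness then waits for the socket to come up before configuring any bdevs. A condensed sketch of that launch pattern, with the polling loop standing in for the waitforlisten helper rather than reproducing its real implementation, might look like:

  #!/usr/bin/env bash
  # Illustrative launch of bdevperf with a private RPC socket; the flags are
  # the ones visible in the trace above.
  set -euo pipefail

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock

  "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Simplified stand-in for waitforlisten: retry a harmless RPC until the
  # application answers on the socket, then continue with bdev setup.
  for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done
  echo "bdevperf (pid $raid_pid) is listening on $SOCK"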
00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:30.657 06:13:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.657 [2024-08-13 06:13:32.312714] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:18:30.657 [2024-08-13 06:13:32.312912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:30.657 Zero copy mechanism will not be used. 00:18:30.657 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94616 ] 00:18:30.917 [2024-08-13 06:13:32.457725] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.917 [2024-08-13 06:13:32.504505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.917 [2024-08-13 06:13:32.547062] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.917 [2024-08-13 06:13:32.547172] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.485 06:13:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:31.485 06:13:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:18:31.485 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:31.485 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:31.744 BaseBdev1_malloc 00:18:31.744 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.745 [2024-08-13 06:13:33.519521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.745 [2024-08-13 06:13:33.519663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.745 [2024-08-13 06:13:33.519692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:31.745 [2024-08-13 06:13:33.519703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.745 [2024-08-13 06:13:33.521751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.745 [2024-08-13 06:13:33.521800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.745 BaseBdev1 00:18:32.004 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:32.004 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:32.004 BaseBdev2_malloc 00:18:32.004 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:32.262 [2024-08-13 06:13:33.923198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:32.262 [2024-08-13 06:13:33.923332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.262 [2024-08-13 06:13:33.923357] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:32.262 [2024-08-13 06:13:33.923367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.262 [2024-08-13 06:13:33.925291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.263 [2024-08-13 06:13:33.925331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:32.263 BaseBdev2 00:18:32.263 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:32.263 06:13:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:32.522 BaseBdev3_malloc 00:18:32.522 06:13:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:32.781 [2024-08-13 06:13:34.368219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:32.781 [2024-08-13 06:13:34.368279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.781 [2024-08-13 06:13:34.368299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:32.781 [2024-08-13 06:13:34.368309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.781 [2024-08-13 06:13:34.370219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.781 [2024-08-13 06:13:34.370260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:32.781 BaseBdev3 00:18:32.781 06:13:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:32.781 06:13:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:33.040 BaseBdev4_malloc 00:18:33.040 06:13:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:33.040 [2024-08-13 06:13:34.768085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:33.040 [2024-08-13 06:13:34.768137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.040 [2024-08-13 06:13:34.768155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:33.040 [2024-08-13 06:13:34.768167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.040 [2024-08-13 06:13:34.770066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.040 [2024-08-13 06:13:34.770104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:33.040 BaseBdev4 00:18:33.040 06:13:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:33.299 spare_malloc 00:18:33.299 06:13:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:33.558 spare_delay 
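The rpc.py calls in this stretch (the passthru step that completes them follows just below) build the synthetic "spare" device used for rebuilds: a 32 MB malloc bdev with 512-byte blocks, wrapped by a delay bdev that leaves reads undelayed but adds latency to writes, wrapped by a passthru bdev that exposes the stack under the name the test expects. Collected into one place, with every parameter value copied from the trace, the stack is roughly:

  # Illustrative recreation of the "spare" bdev stack; run against the same
  # RPC socket as the rest of the test.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  rpc bdev_malloc_create 32 512 -b spare_malloc            # 32 MB backing store
  rpc bdev_delay_create -b spare_malloc -d spare_delay \
      -r 0 -t 0 -w 100000 -n 100000                        # reads fast, writes delayed
  rpc bdev_passthru_create -b spare_delay -p spare         # final name: "spare"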
00:18:33.558 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:33.558 [2024-08-13 06:13:35.303333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.558 [2024-08-13 06:13:35.303385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.558 [2024-08-13 06:13:35.303402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:33.558 [2024-08-13 06:13:35.303412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.558 [2024-08-13 06:13:35.305332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.558 [2024-08-13 06:13:35.305429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.558 spare 00:18:33.558 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:18:33.817 [2024-08-13 06:13:35.495112] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.817 [2024-08-13 06:13:35.496873] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.817 [2024-08-13 06:13:35.496935] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.817 [2024-08-13 06:13:35.496976] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:33.817 [2024-08-13 06:13:35.497074] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:18:33.817 [2024-08-13 06:13:35.497095] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:33.817 [2024-08-13 06:13:35.497377] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:18:33.817 [2024-08-13 06:13:35.497519] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:18:33.817 [2024-08-13 06:13:35.497537] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:18:33.817 [2024-08-13 06:13:35.497682] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.817 06:13:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.817 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.077 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.077 "name": "raid_bdev1", 00:18:34.077 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:34.077 "strip_size_kb": 0, 00:18:34.077 "state": "online", 00:18:34.077 "raid_level": "raid1", 00:18:34.077 "superblock": false, 00:18:34.077 "num_base_bdevs": 4, 00:18:34.077 "num_base_bdevs_discovered": 4, 00:18:34.077 "num_base_bdevs_operational": 4, 00:18:34.077 "base_bdevs_list": [ 00:18:34.077 { 00:18:34.077 "name": "BaseBdev1", 00:18:34.077 "uuid": "df3cd5c8-e686-56e4-ab28-44bf5ecfa059", 00:18:34.077 "is_configured": true, 00:18:34.077 "data_offset": 0, 00:18:34.077 "data_size": 65536 00:18:34.077 }, 00:18:34.077 { 00:18:34.077 "name": "BaseBdev2", 00:18:34.077 "uuid": "96a42ea0-7847-5371-bc09-7d79ba424ac9", 00:18:34.077 "is_configured": true, 00:18:34.077 "data_offset": 0, 00:18:34.077 "data_size": 65536 00:18:34.077 }, 00:18:34.077 { 00:18:34.077 "name": "BaseBdev3", 00:18:34.077 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:34.077 "is_configured": true, 00:18:34.077 "data_offset": 0, 00:18:34.077 "data_size": 65536 00:18:34.077 }, 00:18:34.077 { 00:18:34.077 "name": "BaseBdev4", 00:18:34.077 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:34.077 "is_configured": true, 00:18:34.077 "data_offset": 0, 00:18:34.077 "data_size": 65536 00:18:34.077 } 00:18:34.077 ] 00:18:34.077 }' 00:18:34.077 06:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.077 06:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.644 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:34.644 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:18:34.644 [2024-08-13 06:13:36.401890] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.644 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:18:34.644 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.644 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.903 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:35.162 [2024-08-13 06:13:36.812880] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:18:35.162 /dev/nbd0 00:18:35.162 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:35.162 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:35.162 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:35.163 1+0 records in 00:18:35.163 1+0 records out 00:18:35.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551859 s, 7.4 MB/s 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:18:35.163 06:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:40.434 65536+0 records in 00:18:40.434 65536+0 records out 00:18:40.434 33554432 bytes (34 MB, 32 MiB) copied, 5.31878 s, 6.3 MB/s 00:18:40.434 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:40.434 06:13:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:40.434 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.434 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.434 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:40.434 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.434 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:40.693 [2024-08-13 06:13:42.425786] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.693 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:40.953 [2024-08-13 06:13:42.629503] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.953 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.212 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.212 "name": "raid_bdev1", 00:18:41.212 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:41.212 "strip_size_kb": 0, 00:18:41.212 "state": "online", 00:18:41.212 "raid_level": "raid1", 00:18:41.212 "superblock": false, 
00:18:41.212 "num_base_bdevs": 4, 00:18:41.212 "num_base_bdevs_discovered": 3, 00:18:41.212 "num_base_bdevs_operational": 3, 00:18:41.212 "base_bdevs_list": [ 00:18:41.212 { 00:18:41.212 "name": null, 00:18:41.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.212 "is_configured": false, 00:18:41.212 "data_offset": 0, 00:18:41.212 "data_size": 65536 00:18:41.212 }, 00:18:41.212 { 00:18:41.212 "name": "BaseBdev2", 00:18:41.212 "uuid": "96a42ea0-7847-5371-bc09-7d79ba424ac9", 00:18:41.212 "is_configured": true, 00:18:41.212 "data_offset": 0, 00:18:41.212 "data_size": 65536 00:18:41.212 }, 00:18:41.212 { 00:18:41.212 "name": "BaseBdev3", 00:18:41.212 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:41.212 "is_configured": true, 00:18:41.212 "data_offset": 0, 00:18:41.212 "data_size": 65536 00:18:41.212 }, 00:18:41.212 { 00:18:41.212 "name": "BaseBdev4", 00:18:41.212 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:41.212 "is_configured": true, 00:18:41.212 "data_offset": 0, 00:18:41.212 "data_size": 65536 00:18:41.212 } 00:18:41.212 ] 00:18:41.212 }' 00:18:41.212 06:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.212 06:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.780 06:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:42.042 [2024-08-13 06:13:43.591999] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.042 [2024-08-13 06:13:43.595374] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:18:42.042 [2024-08-13 06:13:43.597066] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.042 06:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:42.977 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.977 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:42.977 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:42.977 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:42.977 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:42.977 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.977 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.237 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.237 "name": "raid_bdev1", 00:18:43.238 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:43.238 "strip_size_kb": 0, 00:18:43.238 "state": "online", 00:18:43.238 "raid_level": "raid1", 00:18:43.238 "superblock": false, 00:18:43.238 "num_base_bdevs": 4, 00:18:43.238 "num_base_bdevs_discovered": 4, 00:18:43.238 "num_base_bdevs_operational": 4, 00:18:43.238 "process": { 00:18:43.238 "type": "rebuild", 00:18:43.238 "target": "spare", 00:18:43.238 "progress": { 00:18:43.238 "blocks": 24576, 00:18:43.238 "percent": 37 00:18:43.238 } 00:18:43.238 }, 00:18:43.238 "base_bdevs_list": [ 00:18:43.238 { 00:18:43.238 "name": "spare", 00:18:43.238 "uuid": 
"97a15eb6-dc13-52e4-90aa-be654522b37a", 00:18:43.238 "is_configured": true, 00:18:43.238 "data_offset": 0, 00:18:43.238 "data_size": 65536 00:18:43.238 }, 00:18:43.238 { 00:18:43.238 "name": "BaseBdev2", 00:18:43.238 "uuid": "96a42ea0-7847-5371-bc09-7d79ba424ac9", 00:18:43.238 "is_configured": true, 00:18:43.238 "data_offset": 0, 00:18:43.238 "data_size": 65536 00:18:43.238 }, 00:18:43.238 { 00:18:43.238 "name": "BaseBdev3", 00:18:43.238 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:43.238 "is_configured": true, 00:18:43.238 "data_offset": 0, 00:18:43.238 "data_size": 65536 00:18:43.238 }, 00:18:43.238 { 00:18:43.238 "name": "BaseBdev4", 00:18:43.238 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:43.238 "is_configured": true, 00:18:43.238 "data_offset": 0, 00:18:43.238 "data_size": 65536 00:18:43.238 } 00:18:43.238 ] 00:18:43.238 }' 00:18:43.238 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:43.238 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.238 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:43.238 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.238 06:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:43.497 [2024-08-13 06:13:45.103679] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.497 [2024-08-13 06:13:45.203019] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:43.497 [2024-08-13 06:13:45.203092] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.497 [2024-08-13 06:13:45.203107] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.497 [2024-08-13 06:13:45.203116] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.497 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.756 06:13:45 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.756 "name": "raid_bdev1", 00:18:43.756 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:43.756 "strip_size_kb": 0, 00:18:43.756 "state": "online", 00:18:43.756 "raid_level": "raid1", 00:18:43.756 "superblock": false, 00:18:43.757 "num_base_bdevs": 4, 00:18:43.757 "num_base_bdevs_discovered": 3, 00:18:43.757 "num_base_bdevs_operational": 3, 00:18:43.757 "base_bdevs_list": [ 00:18:43.757 { 00:18:43.757 "name": null, 00:18:43.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.757 "is_configured": false, 00:18:43.757 "data_offset": 0, 00:18:43.757 "data_size": 65536 00:18:43.757 }, 00:18:43.757 { 00:18:43.757 "name": "BaseBdev2", 00:18:43.757 "uuid": "96a42ea0-7847-5371-bc09-7d79ba424ac9", 00:18:43.757 "is_configured": true, 00:18:43.757 "data_offset": 0, 00:18:43.757 "data_size": 65536 00:18:43.757 }, 00:18:43.757 { 00:18:43.757 "name": "BaseBdev3", 00:18:43.757 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:43.757 "is_configured": true, 00:18:43.757 "data_offset": 0, 00:18:43.757 "data_size": 65536 00:18:43.757 }, 00:18:43.757 { 00:18:43.757 "name": "BaseBdev4", 00:18:43.757 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:43.757 "is_configured": true, 00:18:43.757 "data_offset": 0, 00:18:43.757 "data_size": 65536 00:18:43.757 } 00:18:43.757 ] 00:18:43.757 }' 00:18:43.757 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.757 06:13:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.333 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.333 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:44.333 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:44.333 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:44.333 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:44.333 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.333 06:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.608 06:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:44.608 "name": "raid_bdev1", 00:18:44.608 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:44.608 "strip_size_kb": 0, 00:18:44.608 "state": "online", 00:18:44.608 "raid_level": "raid1", 00:18:44.608 "superblock": false, 00:18:44.608 "num_base_bdevs": 4, 00:18:44.608 "num_base_bdevs_discovered": 3, 00:18:44.608 "num_base_bdevs_operational": 3, 00:18:44.608 "base_bdevs_list": [ 00:18:44.608 { 00:18:44.608 "name": null, 00:18:44.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.608 "is_configured": false, 00:18:44.608 "data_offset": 0, 00:18:44.608 "data_size": 65536 00:18:44.608 }, 00:18:44.608 { 00:18:44.608 "name": "BaseBdev2", 00:18:44.608 "uuid": "96a42ea0-7847-5371-bc09-7d79ba424ac9", 00:18:44.608 "is_configured": true, 00:18:44.608 "data_offset": 0, 00:18:44.608 "data_size": 65536 00:18:44.608 }, 00:18:44.608 { 00:18:44.608 "name": "BaseBdev3", 00:18:44.608 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:44.608 "is_configured": true, 00:18:44.608 "data_offset": 0, 00:18:44.608 "data_size": 65536 00:18:44.608 }, 
00:18:44.608 { 00:18:44.608 "name": "BaseBdev4", 00:18:44.608 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:44.608 "is_configured": true, 00:18:44.608 "data_offset": 0, 00:18:44.608 "data_size": 65536 00:18:44.608 } 00:18:44.608 ] 00:18:44.608 }' 00:18:44.608 06:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:44.608 06:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:44.608 06:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:44.608 06:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:44.608 06:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:44.876 [2024-08-13 06:13:46.468876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.876 [2024-08-13 06:13:46.472223] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:18:44.876 [2024-08-13 06:13:46.473910] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:44.876 06:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:18:45.814 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.814 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:45.814 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:45.814 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:45.814 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:45.814 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.814 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.074 "name": "raid_bdev1", 00:18:46.074 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:46.074 "strip_size_kb": 0, 00:18:46.074 "state": "online", 00:18:46.074 "raid_level": "raid1", 00:18:46.074 "superblock": false, 00:18:46.074 "num_base_bdevs": 4, 00:18:46.074 "num_base_bdevs_discovered": 4, 00:18:46.074 "num_base_bdevs_operational": 4, 00:18:46.074 "process": { 00:18:46.074 "type": "rebuild", 00:18:46.074 "target": "spare", 00:18:46.074 "progress": { 00:18:46.074 "blocks": 24576, 00:18:46.074 "percent": 37 00:18:46.074 } 00:18:46.074 }, 00:18:46.074 "base_bdevs_list": [ 00:18:46.074 { 00:18:46.074 "name": "spare", 00:18:46.074 "uuid": "97a15eb6-dc13-52e4-90aa-be654522b37a", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 00:18:46.074 }, 00:18:46.074 { 00:18:46.074 "name": "BaseBdev2", 00:18:46.074 "uuid": "96a42ea0-7847-5371-bc09-7d79ba424ac9", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 00:18:46.074 }, 00:18:46.074 { 00:18:46.074 "name": "BaseBdev3", 00:18:46.074 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 
00:18:46.074 }, 00:18:46.074 { 00:18:46.074 "name": "BaseBdev4", 00:18:46.074 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 00:18:46.074 } 00:18:46.074 ] 00:18:46.074 }' 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:18:46.074 06:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:46.333 [2024-08-13 06:13:48.011950] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:46.333 [2024-08-13 06:13:48.079070] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.333 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.593 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.593 "name": "raid_bdev1", 00:18:46.593 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:46.593 "strip_size_kb": 0, 00:18:46.593 "state": "online", 00:18:46.593 "raid_level": "raid1", 00:18:46.593 "superblock": false, 00:18:46.593 "num_base_bdevs": 4, 00:18:46.593 "num_base_bdevs_discovered": 3, 00:18:46.593 "num_base_bdevs_operational": 3, 00:18:46.593 "process": { 00:18:46.593 "type": "rebuild", 00:18:46.593 "target": "spare", 00:18:46.593 "progress": { 00:18:46.593 "blocks": 36864, 00:18:46.593 "percent": 56 00:18:46.593 } 00:18:46.593 }, 00:18:46.593 "base_bdevs_list": [ 00:18:46.593 { 00:18:46.593 "name": "spare", 00:18:46.593 "uuid": "97a15eb6-dc13-52e4-90aa-be654522b37a", 00:18:46.593 "is_configured": true, 00:18:46.593 "data_offset": 0, 00:18:46.593 "data_size": 65536 00:18:46.593 }, 00:18:46.593 { 
00:18:46.593 "name": null, 00:18:46.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.593 "is_configured": false, 00:18:46.593 "data_offset": 0, 00:18:46.593 "data_size": 65536 00:18:46.593 }, 00:18:46.593 { 00:18:46.593 "name": "BaseBdev3", 00:18:46.593 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:46.593 "is_configured": true, 00:18:46.593 "data_offset": 0, 00:18:46.593 "data_size": 65536 00:18:46.593 }, 00:18:46.593 { 00:18:46.593 "name": "BaseBdev4", 00:18:46.593 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:46.593 "is_configured": true, 00:18:46.593 "data_offset": 0, 00:18:46.593 "data_size": 65536 00:18:46.593 } 00:18:46.593 ] 00:18:46.593 }' 00:18:46.593 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:46.593 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.593 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=780 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.852 "name": "raid_bdev1", 00:18:46.852 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:46.852 "strip_size_kb": 0, 00:18:46.852 "state": "online", 00:18:46.852 "raid_level": "raid1", 00:18:46.852 "superblock": false, 00:18:46.852 "num_base_bdevs": 4, 00:18:46.852 "num_base_bdevs_discovered": 3, 00:18:46.852 "num_base_bdevs_operational": 3, 00:18:46.852 "process": { 00:18:46.852 "type": "rebuild", 00:18:46.852 "target": "spare", 00:18:46.852 "progress": { 00:18:46.852 "blocks": 43008, 00:18:46.852 "percent": 65 00:18:46.852 } 00:18:46.852 }, 00:18:46.852 "base_bdevs_list": [ 00:18:46.852 { 00:18:46.852 "name": "spare", 00:18:46.852 "uuid": "97a15eb6-dc13-52e4-90aa-be654522b37a", 00:18:46.852 "is_configured": true, 00:18:46.852 "data_offset": 0, 00:18:46.852 "data_size": 65536 00:18:46.852 }, 00:18:46.852 { 00:18:46.852 "name": null, 00:18:46.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.852 "is_configured": false, 00:18:46.852 "data_offset": 0, 00:18:46.852 "data_size": 65536 00:18:46.852 }, 00:18:46.852 { 00:18:46.852 "name": "BaseBdev3", 00:18:46.852 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:46.852 "is_configured": true, 00:18:46.852 "data_offset": 0, 00:18:46.852 "data_size": 65536 00:18:46.852 }, 00:18:46.852 { 
00:18:46.852 "name": "BaseBdev4", 00:18:46.852 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:46.852 "is_configured": true, 00:18:46.852 "data_offset": 0, 00:18:46.852 "data_size": 65536 00:18:46.852 } 00:18:46.852 ] 00:18:46.852 }' 00:18:46.852 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:47.111 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.111 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:47.111 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.111 06:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:18:48.048 [2024-08-13 06:13:49.684617] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:48.048 [2024-08-13 06:13:49.684679] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:48.048 [2024-08-13 06:13:49.684721] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.048 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.307 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:48.307 "name": "raid_bdev1", 00:18:48.307 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:48.307 "strip_size_kb": 0, 00:18:48.307 "state": "online", 00:18:48.307 "raid_level": "raid1", 00:18:48.307 "superblock": false, 00:18:48.307 "num_base_bdevs": 4, 00:18:48.307 "num_base_bdevs_discovered": 3, 00:18:48.307 "num_base_bdevs_operational": 3, 00:18:48.307 "base_bdevs_list": [ 00:18:48.307 { 00:18:48.307 "name": "spare", 00:18:48.307 "uuid": "97a15eb6-dc13-52e4-90aa-be654522b37a", 00:18:48.307 "is_configured": true, 00:18:48.307 "data_offset": 0, 00:18:48.307 "data_size": 65536 00:18:48.307 }, 00:18:48.307 { 00:18:48.307 "name": null, 00:18:48.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.307 "is_configured": false, 00:18:48.307 "data_offset": 0, 00:18:48.307 "data_size": 65536 00:18:48.307 }, 00:18:48.307 { 00:18:48.307 "name": "BaseBdev3", 00:18:48.307 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:48.307 "is_configured": true, 00:18:48.308 "data_offset": 0, 00:18:48.308 "data_size": 65536 00:18:48.308 }, 00:18:48.308 { 00:18:48.308 "name": "BaseBdev4", 00:18:48.308 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:48.308 "is_configured": true, 00:18:48.308 "data_offset": 0, 00:18:48.308 "data_size": 65536 00:18:48.308 } 00:18:48.308 ] 00:18:48.308 }' 00:18:48.308 06:13:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:48.308 06:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.308 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:48.567 "name": "raid_bdev1", 00:18:48.567 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:48.567 "strip_size_kb": 0, 00:18:48.567 "state": "online", 00:18:48.567 "raid_level": "raid1", 00:18:48.567 "superblock": false, 00:18:48.567 "num_base_bdevs": 4, 00:18:48.567 "num_base_bdevs_discovered": 3, 00:18:48.567 "num_base_bdevs_operational": 3, 00:18:48.567 "base_bdevs_list": [ 00:18:48.567 { 00:18:48.567 "name": "spare", 00:18:48.567 "uuid": "97a15eb6-dc13-52e4-90aa-be654522b37a", 00:18:48.567 "is_configured": true, 00:18:48.567 "data_offset": 0, 00:18:48.567 "data_size": 65536 00:18:48.567 }, 00:18:48.567 { 00:18:48.567 "name": null, 00:18:48.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.567 "is_configured": false, 00:18:48.567 "data_offset": 0, 00:18:48.567 "data_size": 65536 00:18:48.567 }, 00:18:48.567 { 00:18:48.567 "name": "BaseBdev3", 00:18:48.567 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:48.567 "is_configured": true, 00:18:48.567 "data_offset": 0, 00:18:48.567 "data_size": 65536 00:18:48.567 }, 00:18:48.567 { 00:18:48.567 "name": "BaseBdev4", 00:18:48.567 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:48.567 "is_configured": true, 00:18:48.567 "data_offset": 0, 00:18:48.567 "data_size": 65536 00:18:48.567 } 00:18:48.567 ] 00:18:48.567 }' 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:48.567 
06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.567 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.827 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.827 "name": "raid_bdev1", 00:18:48.827 "uuid": "4d33909b-ea3a-4e51-ba26-cd07b49bd2c3", 00:18:48.827 "strip_size_kb": 0, 00:18:48.827 "state": "online", 00:18:48.827 "raid_level": "raid1", 00:18:48.827 "superblock": false, 00:18:48.827 "num_base_bdevs": 4, 00:18:48.827 "num_base_bdevs_discovered": 3, 00:18:48.827 "num_base_bdevs_operational": 3, 00:18:48.827 "base_bdevs_list": [ 00:18:48.827 { 00:18:48.827 "name": "spare", 00:18:48.827 "uuid": "97a15eb6-dc13-52e4-90aa-be654522b37a", 00:18:48.827 "is_configured": true, 00:18:48.827 "data_offset": 0, 00:18:48.827 "data_size": 65536 00:18:48.827 }, 00:18:48.827 { 00:18:48.827 "name": null, 00:18:48.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.827 "is_configured": false, 00:18:48.827 "data_offset": 0, 00:18:48.827 "data_size": 65536 00:18:48.827 }, 00:18:48.827 { 00:18:48.827 "name": "BaseBdev3", 00:18:48.827 "uuid": "3470422d-7dd0-57dd-a3b7-bc0af4aabbdb", 00:18:48.827 "is_configured": true, 00:18:48.827 "data_offset": 0, 00:18:48.827 "data_size": 65536 00:18:48.827 }, 00:18:48.827 { 00:18:48.827 "name": "BaseBdev4", 00:18:48.827 "uuid": "01747fc6-ac55-5863-b9ca-9ccdca2f1bf0", 00:18:48.827 "is_configured": true, 00:18:48.827 "data_offset": 0, 00:18:48.827 "data_size": 65536 00:18:48.827 } 00:18:48.827 ] 00:18:48.827 }' 00:18:48.827 06:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.827 06:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.395 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:49.655 [2024-08-13 06:13:51.257102] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.655 [2024-08-13 06:13:51.257195] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.655 [2024-08-13 06:13:51.257302] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.655 [2024-08-13 06:13:51.257393] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.655 [2024-08-13 06:13:51.257434] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:49.655 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:49.915 /dev/nbd0 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:49.915 1+0 records in 00:18:49.915 1+0 records out 00:18:49.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403139 s, 10.2 MB/s 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # return 0 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:49.915 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:50.175 /dev/nbd1 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:50.175 1+0 records in 00:18:50.175 1+0 records out 00:18:50.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434173 s, 9.4 MB/s 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.175 06:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.435 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 94616 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 94616 ']' 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 94616 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94616 00:18:50.695 killing process with pid 94616 00:18:50.695 Received shutdown signal, test time was about 60.000000 seconds 00:18:50.695 00:18:50.695 Latency(us) 00:18:50.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.695 =================================================================================================================== 00:18:50.695 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94616' 00:18:50.695 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 94616 00:18:50.695 [2024-08-13 06:13:52.390039] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.695 06:13:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 94616 00:18:50.695 [2024-08-13 06:13:52.440169] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.955 06:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:18:50.955 00:18:50.955 real 0m20.455s 00:18:50.955 user 0m27.761s 00:18:50.955 sys 0m3.836s 00:18:50.955 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:50.955 ************************************ 00:18:50.955 END TEST raid_rebuild_test 00:18:50.955 ************************************ 00:18:50.955 06:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.955 06:13:52 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:18:50.955 06:13:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:50.955 06:13:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:50.955 06:13:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.219 ************************************ 00:18:51.219 START TEST raid_rebuild_test_sb 00:18:51.219 ************************************ 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true false true 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:18:51.219 
06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=95101 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 95101 /var/tmp/spdk-raid.sock 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 95101 ']' 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:51.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:51.219 06:13:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.219 [2024-08-13 06:13:52.857287] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:18:51.219 [2024-08-13 06:13:52.857495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95101 ] 00:18:51.219 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:51.219 Zero copy mechanism will not be used. 
00:18:51.219 [2024-08-13 06:13:53.003612] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.479 [2024-08-13 06:13:53.050771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.479 [2024-08-13 06:13:53.093330] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.479 [2024-08-13 06:13:53.093450] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.048 06:13:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:52.048 06:13:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:18:52.048 06:13:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:52.048 06:13:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:52.048 BaseBdev1_malloc 00:18:52.308 06:13:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:52.308 [2024-08-13 06:13:54.030307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:52.308 [2024-08-13 06:13:54.030465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.308 [2024-08-13 06:13:54.030516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:52.308 [2024-08-13 06:13:54.030548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.308 [2024-08-13 06:13:54.032598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.308 [2024-08-13 06:13:54.032677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:52.308 BaseBdev1 00:18:52.308 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:52.308 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:52.568 BaseBdev2_malloc 00:18:52.568 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:52.827 [2024-08-13 06:13:54.370484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:52.827 [2024-08-13 06:13:54.370562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.827 [2024-08-13 06:13:54.370586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:52.827 [2024-08-13 06:13:54.370597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.827 [2024-08-13 06:13:54.372646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.827 [2024-08-13 06:13:54.372687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:52.827 BaseBdev2 00:18:52.827 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:52.827 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:52.827 BaseBdev3_malloc 00:18:52.827 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:53.087 [2024-08-13 06:13:54.783213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:53.087 [2024-08-13 06:13:54.783357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.087 [2024-08-13 06:13:54.783398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:53.087 [2024-08-13 06:13:54.783434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.087 [2024-08-13 06:13:54.785480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.087 [2024-08-13 06:13:54.785558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:53.087 BaseBdev3 00:18:53.087 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:18:53.087 06:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:53.347 BaseBdev4_malloc 00:18:53.347 06:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:53.607 [2024-08-13 06:13:55.187111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:53.607 [2024-08-13 06:13:55.187181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.607 [2024-08-13 06:13:55.187203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:53.607 [2024-08-13 06:13:55.187216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.607 [2024-08-13 06:13:55.189191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.607 [2024-08-13 06:13:55.189303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:53.607 BaseBdev4 00:18:53.607 06:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:53.607 spare_malloc 00:18:53.867 06:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:53.867 spare_delay 00:18:53.867 06:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:54.126 [2024-08-13 06:13:55.794576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:54.126 [2024-08-13 06:13:55.794642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.126 [2024-08-13 06:13:55.794663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:54.126 [2024-08-13 06:13:55.794673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.126 [2024-08-13 
06:13:55.796718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.126 [2024-08-13 06:13:55.796759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:54.126 spare 00:18:54.126 06:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:18:54.386 [2024-08-13 06:13:55.986353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.386 [2024-08-13 06:13:55.988172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.386 [2024-08-13 06:13:55.988233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.386 [2024-08-13 06:13:55.988276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:54.386 [2024-08-13 06:13:55.988446] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:18:54.386 [2024-08-13 06:13:55.988459] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:54.386 [2024-08-13 06:13:55.988802] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:18:54.386 [2024-08-13 06:13:55.988954] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:18:54.386 [2024-08-13 06:13:55.988968] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:18:54.386 [2024-08-13 06:13:55.989129] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.386 "name": "raid_bdev1", 00:18:54.386 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:18:54.386 "strip_size_kb": 0, 00:18:54.386 "state": "online", 00:18:54.386 "raid_level": "raid1", 00:18:54.386 "superblock": true, 00:18:54.386 "num_base_bdevs": 4, 00:18:54.386 
"num_base_bdevs_discovered": 4, 00:18:54.386 "num_base_bdevs_operational": 4, 00:18:54.386 "base_bdevs_list": [ 00:18:54.386 { 00:18:54.386 "name": "BaseBdev1", 00:18:54.386 "uuid": "548e67b9-46e1-5d38-b74a-b9b0127634d5", 00:18:54.386 "is_configured": true, 00:18:54.386 "data_offset": 2048, 00:18:54.386 "data_size": 63488 00:18:54.386 }, 00:18:54.386 { 00:18:54.386 "name": "BaseBdev2", 00:18:54.386 "uuid": "f134be7a-9a0c-5e8c-8fcc-3792b3aa00af", 00:18:54.386 "is_configured": true, 00:18:54.386 "data_offset": 2048, 00:18:54.386 "data_size": 63488 00:18:54.386 }, 00:18:54.386 { 00:18:54.386 "name": "BaseBdev3", 00:18:54.386 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:18:54.386 "is_configured": true, 00:18:54.386 "data_offset": 2048, 00:18:54.386 "data_size": 63488 00:18:54.386 }, 00:18:54.386 { 00:18:54.386 "name": "BaseBdev4", 00:18:54.386 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:18:54.386 "is_configured": true, 00:18:54.386 "data_offset": 2048, 00:18:54.386 "data_size": 63488 00:18:54.386 } 00:18:54.386 ] 00:18:54.386 }' 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.386 06:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.956 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:18:54.956 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:55.216 [2024-08-13 06:13:56.861547] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.216 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:18:55.216 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.216 06:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:55.476 [2024-08-13 06:13:57.228638] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:18:55.476 /dev/nbd0 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:55.476 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.735 1+0 records in 00:18:55.735 1+0 records out 00:18:55.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423288 s, 9.7 MB/s 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:18:55.735 06:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:01.052 63488+0 records in 00:19:01.052 63488+0 records out 00:19:01.052 32505856 bytes (33 MB, 31 MiB) copied, 4.90542 s, 6.6 MB/s 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:01.052 [2024-08-13 06:14:02.391520] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:01.052 [2024-08-13 06:14:02.564421] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.052 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.052 "name": "raid_bdev1", 00:19:01.052 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:01.052 "strip_size_kb": 0, 00:19:01.052 "state": "online", 00:19:01.052 "raid_level": "raid1", 00:19:01.052 "superblock": true, 00:19:01.052 "num_base_bdevs": 4, 00:19:01.053 "num_base_bdevs_discovered": 3, 00:19:01.053 "num_base_bdevs_operational": 3, 00:19:01.053 "base_bdevs_list": [ 00:19:01.053 { 00:19:01.053 "name": null, 00:19:01.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.053 "is_configured": false, 00:19:01.053 "data_offset": 2048, 00:19:01.053 "data_size": 63488 00:19:01.053 }, 00:19:01.053 { 00:19:01.053 
"name": "BaseBdev2", 00:19:01.053 "uuid": "f134be7a-9a0c-5e8c-8fcc-3792b3aa00af", 00:19:01.053 "is_configured": true, 00:19:01.053 "data_offset": 2048, 00:19:01.053 "data_size": 63488 00:19:01.053 }, 00:19:01.053 { 00:19:01.053 "name": "BaseBdev3", 00:19:01.053 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:01.053 "is_configured": true, 00:19:01.053 "data_offset": 2048, 00:19:01.053 "data_size": 63488 00:19:01.053 }, 00:19:01.053 { 00:19:01.053 "name": "BaseBdev4", 00:19:01.053 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:01.053 "is_configured": true, 00:19:01.053 "data_offset": 2048, 00:19:01.053 "data_size": 63488 00:19:01.053 } 00:19:01.053 ] 00:19:01.053 }' 00:19:01.053 06:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.053 06:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.621 06:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.881 [2024-08-13 06:14:03.490829] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.881 [2024-08-13 06:14:03.494194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:19:01.881 [2024-08-13 06:14:03.496010] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.881 06:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:02.819 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.819 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.819 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:02.819 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:02.819 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.819 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.819 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.079 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.079 "name": "raid_bdev1", 00:19:03.079 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:03.079 "strip_size_kb": 0, 00:19:03.079 "state": "online", 00:19:03.079 "raid_level": "raid1", 00:19:03.079 "superblock": true, 00:19:03.079 "num_base_bdevs": 4, 00:19:03.079 "num_base_bdevs_discovered": 4, 00:19:03.079 "num_base_bdevs_operational": 4, 00:19:03.079 "process": { 00:19:03.079 "type": "rebuild", 00:19:03.079 "target": "spare", 00:19:03.079 "progress": { 00:19:03.079 "blocks": 24576, 00:19:03.079 "percent": 38 00:19:03.079 } 00:19:03.079 }, 00:19:03.079 "base_bdevs_list": [ 00:19:03.079 { 00:19:03.079 "name": "spare", 00:19:03.079 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:03.079 "is_configured": true, 00:19:03.079 "data_offset": 2048, 00:19:03.079 "data_size": 63488 00:19:03.079 }, 00:19:03.079 { 00:19:03.079 "name": "BaseBdev2", 00:19:03.079 "uuid": "f134be7a-9a0c-5e8c-8fcc-3792b3aa00af", 00:19:03.079 "is_configured": true, 00:19:03.079 "data_offset": 2048, 00:19:03.079 "data_size": 63488 
00:19:03.079 }, 00:19:03.079 { 00:19:03.079 "name": "BaseBdev3", 00:19:03.079 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:03.079 "is_configured": true, 00:19:03.079 "data_offset": 2048, 00:19:03.079 "data_size": 63488 00:19:03.079 }, 00:19:03.079 { 00:19:03.079 "name": "BaseBdev4", 00:19:03.079 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:03.079 "is_configured": true, 00:19:03.079 "data_offset": 2048, 00:19:03.079 "data_size": 63488 00:19:03.079 } 00:19:03.079 ] 00:19:03.079 }' 00:19:03.079 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:03.079 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.079 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:03.079 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.079 06:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:03.339 [2024-08-13 06:14:05.010771] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.339 [2024-08-13 06:14:05.101996] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:03.339 [2024-08-13 06:14:05.102067] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.339 [2024-08-13 06:14:05.102083] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.339 [2024-08-13 06:14:05.102092] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:03.598 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:03.598 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:03.598 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.598 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.598 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.599 "name": "raid_bdev1", 00:19:03.599 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:03.599 "strip_size_kb": 0, 00:19:03.599 "state": "online", 00:19:03.599 "raid_level": "raid1", 00:19:03.599 "superblock": true, 00:19:03.599 "num_base_bdevs": 4, 
00:19:03.599 "num_base_bdevs_discovered": 3, 00:19:03.599 "num_base_bdevs_operational": 3, 00:19:03.599 "base_bdevs_list": [ 00:19:03.599 { 00:19:03.599 "name": null, 00:19:03.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.599 "is_configured": false, 00:19:03.599 "data_offset": 2048, 00:19:03.599 "data_size": 63488 00:19:03.599 }, 00:19:03.599 { 00:19:03.599 "name": "BaseBdev2", 00:19:03.599 "uuid": "f134be7a-9a0c-5e8c-8fcc-3792b3aa00af", 00:19:03.599 "is_configured": true, 00:19:03.599 "data_offset": 2048, 00:19:03.599 "data_size": 63488 00:19:03.599 }, 00:19:03.599 { 00:19:03.599 "name": "BaseBdev3", 00:19:03.599 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:03.599 "is_configured": true, 00:19:03.599 "data_offset": 2048, 00:19:03.599 "data_size": 63488 00:19:03.599 }, 00:19:03.599 { 00:19:03.599 "name": "BaseBdev4", 00:19:03.599 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:03.599 "is_configured": true, 00:19:03.599 "data_offset": 2048, 00:19:03.599 "data_size": 63488 00:19:03.599 } 00:19:03.599 ] 00:19:03.599 }' 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.599 06:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.168 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.168 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:04.168 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:04.168 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:04.168 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:04.168 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.168 06:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.428 06:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.428 "name": "raid_bdev1", 00:19:04.428 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:04.428 "strip_size_kb": 0, 00:19:04.428 "state": "online", 00:19:04.428 "raid_level": "raid1", 00:19:04.428 "superblock": true, 00:19:04.428 "num_base_bdevs": 4, 00:19:04.428 "num_base_bdevs_discovered": 3, 00:19:04.428 "num_base_bdevs_operational": 3, 00:19:04.428 "base_bdevs_list": [ 00:19:04.428 { 00:19:04.428 "name": null, 00:19:04.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.428 "is_configured": false, 00:19:04.428 "data_offset": 2048, 00:19:04.428 "data_size": 63488 00:19:04.428 }, 00:19:04.428 { 00:19:04.428 "name": "BaseBdev2", 00:19:04.428 "uuid": "f134be7a-9a0c-5e8c-8fcc-3792b3aa00af", 00:19:04.428 "is_configured": true, 00:19:04.428 "data_offset": 2048, 00:19:04.428 "data_size": 63488 00:19:04.428 }, 00:19:04.428 { 00:19:04.428 "name": "BaseBdev3", 00:19:04.428 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:04.428 "is_configured": true, 00:19:04.428 "data_offset": 2048, 00:19:04.428 "data_size": 63488 00:19:04.428 }, 00:19:04.428 { 00:19:04.428 "name": "BaseBdev4", 00:19:04.428 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:04.428 "is_configured": true, 00:19:04.428 "data_offset": 2048, 00:19:04.428 "data_size": 63488 00:19:04.428 } 00:19:04.428 ] 00:19:04.428 }' 
00:19:04.428 06:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:04.428 06:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:04.428 06:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:04.428 06:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:04.428 06:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.688 [2024-08-13 06:14:06.343652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.688 [2024-08-13 06:14:06.346958] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:19:04.688 [2024-08-13 06:14:06.348656] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.688 06:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:19:05.627 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.627 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:05.628 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:05.628 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:05.628 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:05.628 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.628 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.888 "name": "raid_bdev1", 00:19:05.888 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:05.888 "strip_size_kb": 0, 00:19:05.888 "state": "online", 00:19:05.888 "raid_level": "raid1", 00:19:05.888 "superblock": true, 00:19:05.888 "num_base_bdevs": 4, 00:19:05.888 "num_base_bdevs_discovered": 4, 00:19:05.888 "num_base_bdevs_operational": 4, 00:19:05.888 "process": { 00:19:05.888 "type": "rebuild", 00:19:05.888 "target": "spare", 00:19:05.888 "progress": { 00:19:05.888 "blocks": 22528, 00:19:05.888 "percent": 35 00:19:05.888 } 00:19:05.888 }, 00:19:05.888 "base_bdevs_list": [ 00:19:05.888 { 00:19:05.888 "name": "spare", 00:19:05.888 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:05.888 "is_configured": true, 00:19:05.888 "data_offset": 2048, 00:19:05.888 "data_size": 63488 00:19:05.888 }, 00:19:05.888 { 00:19:05.888 "name": "BaseBdev2", 00:19:05.888 "uuid": "f134be7a-9a0c-5e8c-8fcc-3792b3aa00af", 00:19:05.888 "is_configured": true, 00:19:05.888 "data_offset": 2048, 00:19:05.888 "data_size": 63488 00:19:05.888 }, 00:19:05.888 { 00:19:05.888 "name": "BaseBdev3", 00:19:05.888 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:05.888 "is_configured": true, 00:19:05.888 "data_offset": 2048, 00:19:05.888 "data_size": 63488 00:19:05.888 }, 00:19:05.888 { 00:19:05.888 "name": "BaseBdev4", 00:19:05.888 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:05.888 "is_configured": true, 00:19:05.888 "data_offset": 2048, 00:19:05.888 
"data_size": 63488 00:19:05.888 } 00:19:05.888 ] 00:19:05.888 }' 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:19:05.888 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:19:05.888 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:06.148 [2024-08-13 06:14:07.798931] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:06.408 [2024-08-13 06:14:07.953586] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.408 06:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.408 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.408 "name": "raid_bdev1", 00:19:06.408 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:06.408 "strip_size_kb": 0, 00:19:06.408 "state": "online", 00:19:06.408 "raid_level": "raid1", 00:19:06.408 "superblock": true, 00:19:06.408 "num_base_bdevs": 4, 00:19:06.408 "num_base_bdevs_discovered": 3, 00:19:06.408 "num_base_bdevs_operational": 3, 00:19:06.408 "process": { 00:19:06.408 "type": "rebuild", 00:19:06.408 "target": "spare", 00:19:06.408 "progress": { 00:19:06.408 "blocks": 32768, 00:19:06.408 "percent": 51 00:19:06.408 } 00:19:06.408 }, 00:19:06.408 "base_bdevs_list": [ 00:19:06.408 { 00:19:06.408 "name": "spare", 00:19:06.408 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:06.408 "is_configured": true, 00:19:06.408 "data_offset": 2048, 00:19:06.408 
"data_size": 63488 00:19:06.408 }, 00:19:06.408 { 00:19:06.408 "name": null, 00:19:06.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.408 "is_configured": false, 00:19:06.408 "data_offset": 2048, 00:19:06.408 "data_size": 63488 00:19:06.408 }, 00:19:06.408 { 00:19:06.408 "name": "BaseBdev3", 00:19:06.408 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:06.408 "is_configured": true, 00:19:06.408 "data_offset": 2048, 00:19:06.408 "data_size": 63488 00:19:06.408 }, 00:19:06.408 { 00:19:06.408 "name": "BaseBdev4", 00:19:06.408 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:06.408 "is_configured": true, 00:19:06.408 "data_offset": 2048, 00:19:06.408 "data_size": 63488 00:19:06.408 } 00:19:06.408 ] 00:19:06.408 }' 00:19:06.408 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:06.408 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=800 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.668 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.668 "name": "raid_bdev1", 00:19:06.668 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:06.668 "strip_size_kb": 0, 00:19:06.668 "state": "online", 00:19:06.668 "raid_level": "raid1", 00:19:06.668 "superblock": true, 00:19:06.668 "num_base_bdevs": 4, 00:19:06.668 "num_base_bdevs_discovered": 3, 00:19:06.668 "num_base_bdevs_operational": 3, 00:19:06.668 "process": { 00:19:06.668 "type": "rebuild", 00:19:06.668 "target": "spare", 00:19:06.668 "progress": { 00:19:06.668 "blocks": 38912, 00:19:06.669 "percent": 61 00:19:06.669 } 00:19:06.669 }, 00:19:06.669 "base_bdevs_list": [ 00:19:06.669 { 00:19:06.669 "name": "spare", 00:19:06.669 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:06.669 "is_configured": true, 00:19:06.669 "data_offset": 2048, 00:19:06.669 "data_size": 63488 00:19:06.669 }, 00:19:06.669 { 00:19:06.669 "name": null, 00:19:06.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.669 "is_configured": false, 00:19:06.669 "data_offset": 2048, 00:19:06.669 "data_size": 63488 00:19:06.669 }, 00:19:06.669 { 00:19:06.669 "name": "BaseBdev3", 00:19:06.669 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:06.669 
"is_configured": true, 00:19:06.669 "data_offset": 2048, 00:19:06.669 "data_size": 63488 00:19:06.669 }, 00:19:06.669 { 00:19:06.669 "name": "BaseBdev4", 00:19:06.669 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:06.669 "is_configured": true, 00:19:06.669 "data_offset": 2048, 00:19:06.669 "data_size": 63488 00:19:06.669 } 00:19:06.669 ] 00:19:06.669 }' 00:19:06.669 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:06.928 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.928 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:06.928 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.928 06:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.868 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.868 [2024-08-13 06:14:09.559274] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:07.868 [2024-08-13 06:14:09.559350] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:07.868 [2024-08-13 06:14:09.559449] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.128 "name": "raid_bdev1", 00:19:08.128 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:08.128 "strip_size_kb": 0, 00:19:08.128 "state": "online", 00:19:08.128 "raid_level": "raid1", 00:19:08.128 "superblock": true, 00:19:08.128 "num_base_bdevs": 4, 00:19:08.128 "num_base_bdevs_discovered": 3, 00:19:08.128 "num_base_bdevs_operational": 3, 00:19:08.128 "base_bdevs_list": [ 00:19:08.128 { 00:19:08.128 "name": "spare", 00:19:08.128 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:08.128 "is_configured": true, 00:19:08.128 "data_offset": 2048, 00:19:08.128 "data_size": 63488 00:19:08.128 }, 00:19:08.128 { 00:19:08.128 "name": null, 00:19:08.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.128 "is_configured": false, 00:19:08.128 "data_offset": 2048, 00:19:08.128 "data_size": 63488 00:19:08.128 }, 00:19:08.128 { 00:19:08.128 "name": "BaseBdev3", 00:19:08.128 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:08.128 "is_configured": true, 00:19:08.128 "data_offset": 2048, 00:19:08.128 "data_size": 63488 00:19:08.128 }, 00:19:08.128 { 00:19:08.128 "name": "BaseBdev4", 00:19:08.128 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:08.128 
"is_configured": true, 00:19:08.128 "data_offset": 2048, 00:19:08.128 "data_size": 63488 00:19:08.128 } 00:19:08.128 ] 00:19:08.128 }' 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:08.128 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:08.129 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.129 06:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.388 "name": "raid_bdev1", 00:19:08.388 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:08.388 "strip_size_kb": 0, 00:19:08.388 "state": "online", 00:19:08.388 "raid_level": "raid1", 00:19:08.388 "superblock": true, 00:19:08.388 "num_base_bdevs": 4, 00:19:08.388 "num_base_bdevs_discovered": 3, 00:19:08.388 "num_base_bdevs_operational": 3, 00:19:08.388 "base_bdevs_list": [ 00:19:08.388 { 00:19:08.388 "name": "spare", 00:19:08.388 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:08.388 "is_configured": true, 00:19:08.388 "data_offset": 2048, 00:19:08.388 "data_size": 63488 00:19:08.388 }, 00:19:08.388 { 00:19:08.388 "name": null, 00:19:08.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.388 "is_configured": false, 00:19:08.388 "data_offset": 2048, 00:19:08.388 "data_size": 63488 00:19:08.388 }, 00:19:08.388 { 00:19:08.388 "name": "BaseBdev3", 00:19:08.388 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:08.388 "is_configured": true, 00:19:08.388 "data_offset": 2048, 00:19:08.388 "data_size": 63488 00:19:08.388 }, 00:19:08.388 { 00:19:08.388 "name": "BaseBdev4", 00:19:08.388 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:08.388 "is_configured": true, 00:19:08.388 "data_offset": 2048, 00:19:08.388 "data_size": 63488 00:19:08.388 } 00:19:08.388 ] 00:19:08.388 }' 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:08.388 
06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.388 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.648 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.648 "name": "raid_bdev1", 00:19:08.649 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:08.649 "strip_size_kb": 0, 00:19:08.649 "state": "online", 00:19:08.649 "raid_level": "raid1", 00:19:08.649 "superblock": true, 00:19:08.649 "num_base_bdevs": 4, 00:19:08.649 "num_base_bdevs_discovered": 3, 00:19:08.649 "num_base_bdevs_operational": 3, 00:19:08.649 "base_bdevs_list": [ 00:19:08.649 { 00:19:08.649 "name": "spare", 00:19:08.649 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:08.649 "is_configured": true, 00:19:08.649 "data_offset": 2048, 00:19:08.649 "data_size": 63488 00:19:08.649 }, 00:19:08.649 { 00:19:08.649 "name": null, 00:19:08.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.649 "is_configured": false, 00:19:08.649 "data_offset": 2048, 00:19:08.649 "data_size": 63488 00:19:08.649 }, 00:19:08.649 { 00:19:08.649 "name": "BaseBdev3", 00:19:08.649 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:08.649 "is_configured": true, 00:19:08.649 "data_offset": 2048, 00:19:08.649 "data_size": 63488 00:19:08.649 }, 00:19:08.649 { 00:19:08.649 "name": "BaseBdev4", 00:19:08.649 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:08.649 "is_configured": true, 00:19:08.649 "data_offset": 2048, 00:19:08.649 "data_size": 63488 00:19:08.649 } 00:19:08.649 ] 00:19:08.649 }' 00:19:08.649 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.649 06:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.242 06:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:09.517 [2024-08-13 06:14:11.072059] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:09.517 [2024-08-13 06:14:11.072098] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.517 [2024-08-13 06:14:11.072164] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.517 [2024-08-13 06:14:11.072239] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:19:09.517 [2024-08-13 06:14:11.072249] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:19:09.517 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:19:09.517 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:09.788 /dev/nbd0 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:09.788 1+0 records in 00:19:09.788 1+0 records out 00:19:09.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541697 s, 7.6 MB/s 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@882 -- # size=4096 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:09.788 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:10.047 /dev/nbd1 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.047 1+0 records in 00:19:10.047 1+0 records out 00:19:10.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377688 s, 10.8 MB/s 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:10.047 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:10.306 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:10.306 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:10.306 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:10.306 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:19:10.306 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:10.306 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.306 06:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:19:10.565 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:10.824 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:11.084 [2024-08-13 06:14:12.711146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.084 [2024-08-13 06:14:12.711206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.084 [2024-08-13 06:14:12.711238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:11.084 [2024-08-13 06:14:12.711247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.084 [2024-08-13 06:14:12.713227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.084 [2024-08-13 06:14:12.713263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.084 [2024-08-13 06:14:12.713331] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:11.084 [2024-08-13 06:14:12.713367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.084 [2024-08-13 06:14:12.713485] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:11.084 [2024-08-13 06:14:12.713576] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:11.084 spare 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.084 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.084 [2024-08-13 06:14:12.813489] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:19:11.084 [2024-08-13 06:14:12.813513] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:11.084 [2024-08-13 06:14:12.813760] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:19:11.084 [2024-08-13 06:14:12.813887] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:19:11.084 [2024-08-13 06:14:12.813905] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:19:11.084 [2024-08-13 06:14:12.814031] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.344 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.344 "name": "raid_bdev1", 00:19:11.344 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:11.344 "strip_size_kb": 0, 00:19:11.344 "state": "online", 00:19:11.344 "raid_level": "raid1", 00:19:11.344 "superblock": true, 00:19:11.344 "num_base_bdevs": 4, 00:19:11.344 "num_base_bdevs_discovered": 3, 00:19:11.344 "num_base_bdevs_operational": 3, 00:19:11.344 "base_bdevs_list": [ 00:19:11.344 { 00:19:11.344 "name": "spare", 00:19:11.344 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:11.344 "is_configured": true, 00:19:11.344 "data_offset": 2048, 00:19:11.344 "data_size": 63488 00:19:11.344 }, 00:19:11.344 { 00:19:11.344 "name": null, 00:19:11.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.344 "is_configured": false, 00:19:11.344 "data_offset": 2048, 
00:19:11.344 "data_size": 63488 00:19:11.344 }, 00:19:11.344 { 00:19:11.344 "name": "BaseBdev3", 00:19:11.344 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:11.344 "is_configured": true, 00:19:11.344 "data_offset": 2048, 00:19:11.344 "data_size": 63488 00:19:11.344 }, 00:19:11.344 { 00:19:11.344 "name": "BaseBdev4", 00:19:11.344 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:11.344 "is_configured": true, 00:19:11.344 "data_offset": 2048, 00:19:11.344 "data_size": 63488 00:19:11.344 } 00:19:11.344 ] 00:19:11.344 }' 00:19:11.344 06:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.344 06:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.912 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.912 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:11.912 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:11.912 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:11.912 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:11.912 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.912 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.174 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.174 "name": "raid_bdev1", 00:19:12.174 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:12.174 "strip_size_kb": 0, 00:19:12.174 "state": "online", 00:19:12.174 "raid_level": "raid1", 00:19:12.174 "superblock": true, 00:19:12.174 "num_base_bdevs": 4, 00:19:12.174 "num_base_bdevs_discovered": 3, 00:19:12.174 "num_base_bdevs_operational": 3, 00:19:12.174 "base_bdevs_list": [ 00:19:12.174 { 00:19:12.174 "name": "spare", 00:19:12.174 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:12.174 "is_configured": true, 00:19:12.174 "data_offset": 2048, 00:19:12.174 "data_size": 63488 00:19:12.174 }, 00:19:12.174 { 00:19:12.174 "name": null, 00:19:12.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.174 "is_configured": false, 00:19:12.174 "data_offset": 2048, 00:19:12.174 "data_size": 63488 00:19:12.174 }, 00:19:12.174 { 00:19:12.174 "name": "BaseBdev3", 00:19:12.174 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:12.174 "is_configured": true, 00:19:12.174 "data_offset": 2048, 00:19:12.174 "data_size": 63488 00:19:12.174 }, 00:19:12.174 { 00:19:12.174 "name": "BaseBdev4", 00:19:12.174 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:12.174 "is_configured": true, 00:19:12.174 "data_offset": 2048, 00:19:12.174 "data_size": 63488 00:19:12.174 } 00:19:12.174 ] 00:19:12.174 }' 00:19:12.174 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:12.174 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:12.174 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:12.174 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:12.174 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:19:12.174 06:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.433 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.433 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:12.692 [2024-08-13 06:14:14.240793] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.692 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.692 "name": "raid_bdev1", 00:19:12.692 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:12.692 "strip_size_kb": 0, 00:19:12.692 "state": "online", 00:19:12.692 "raid_level": "raid1", 00:19:12.692 "superblock": true, 00:19:12.692 "num_base_bdevs": 4, 00:19:12.692 "num_base_bdevs_discovered": 2, 00:19:12.692 "num_base_bdevs_operational": 2, 00:19:12.692 "base_bdevs_list": [ 00:19:12.692 { 00:19:12.692 "name": null, 00:19:12.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.692 "is_configured": false, 00:19:12.692 "data_offset": 2048, 00:19:12.692 "data_size": 63488 00:19:12.692 }, 00:19:12.692 { 00:19:12.692 "name": null, 00:19:12.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.692 "is_configured": false, 00:19:12.692 "data_offset": 2048, 00:19:12.692 "data_size": 63488 00:19:12.692 }, 00:19:12.692 { 00:19:12.692 "name": "BaseBdev3", 00:19:12.692 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:12.692 "is_configured": true, 00:19:12.692 "data_offset": 2048, 00:19:12.692 "data_size": 63488 00:19:12.692 }, 00:19:12.692 { 00:19:12.692 "name": "BaseBdev4", 00:19:12.692 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:12.692 "is_configured": true, 00:19:12.692 "data_offset": 2048, 00:19:12.692 "data_size": 63488 00:19:12.693 } 00:19:12.693 ] 00:19:12.693 }' 00:19:12.693 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.693 06:14:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.260 06:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:13.520 [2024-08-13 06:14:15.147291] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.520 [2024-08-13 06:14:15.147495] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:13.520 [2024-08-13 06:14:15.147508] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:13.520 [2024-08-13 06:14:15.147558] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.520 [2024-08-13 06:14:15.150800] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:19:13.520 [2024-08-13 06:14:15.152665] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.520 06:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:19:14.457 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.457 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:14.457 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:14.457 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:14.457 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:14.457 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.457 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.717 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:14.717 "name": "raid_bdev1", 00:19:14.717 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:14.717 "strip_size_kb": 0, 00:19:14.717 "state": "online", 00:19:14.717 "raid_level": "raid1", 00:19:14.717 "superblock": true, 00:19:14.717 "num_base_bdevs": 4, 00:19:14.717 "num_base_bdevs_discovered": 3, 00:19:14.717 "num_base_bdevs_operational": 3, 00:19:14.717 "process": { 00:19:14.717 "type": "rebuild", 00:19:14.717 "target": "spare", 00:19:14.717 "progress": { 00:19:14.717 "blocks": 22528, 00:19:14.717 "percent": 35 00:19:14.717 } 00:19:14.717 }, 00:19:14.717 "base_bdevs_list": [ 00:19:14.717 { 00:19:14.717 "name": "spare", 00:19:14.717 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:14.717 "is_configured": true, 00:19:14.717 "data_offset": 2048, 00:19:14.717 "data_size": 63488 00:19:14.717 }, 00:19:14.717 { 00:19:14.717 "name": null, 00:19:14.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.717 "is_configured": false, 00:19:14.717 "data_offset": 2048, 00:19:14.717 "data_size": 63488 00:19:14.717 }, 00:19:14.717 { 00:19:14.717 "name": "BaseBdev3", 00:19:14.717 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:14.717 "is_configured": true, 00:19:14.717 "data_offset": 2048, 00:19:14.717 "data_size": 63488 00:19:14.717 }, 00:19:14.717 { 00:19:14.717 "name": "BaseBdev4", 00:19:14.717 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:14.717 
"is_configured": true, 00:19:14.717 "data_offset": 2048, 00:19:14.717 "data_size": 63488 00:19:14.717 } 00:19:14.717 ] 00:19:14.717 }' 00:19:14.717 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:14.717 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.717 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:14.717 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.717 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:14.976 [2024-08-13 06:14:16.662951] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.976 [2024-08-13 06:14:16.758058] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:14.976 [2024-08-13 06:14:16.758112] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.976 [2024-08-13 06:14:16.758130] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.977 [2024-08-13 06:14:16.758137] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.236 06:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.236 06:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.236 "name": "raid_bdev1", 00:19:15.236 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:15.236 "strip_size_kb": 0, 00:19:15.236 "state": "online", 00:19:15.236 "raid_level": "raid1", 00:19:15.236 "superblock": true, 00:19:15.236 "num_base_bdevs": 4, 00:19:15.236 "num_base_bdevs_discovered": 2, 00:19:15.236 "num_base_bdevs_operational": 2, 00:19:15.236 "base_bdevs_list": [ 00:19:15.236 { 00:19:15.236 "name": null, 00:19:15.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.236 "is_configured": false, 00:19:15.236 "data_offset": 2048, 00:19:15.236 "data_size": 63488 00:19:15.236 }, 00:19:15.236 { 00:19:15.236 
"name": null, 00:19:15.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.236 "is_configured": false, 00:19:15.236 "data_offset": 2048, 00:19:15.236 "data_size": 63488 00:19:15.236 }, 00:19:15.236 { 00:19:15.236 "name": "BaseBdev3", 00:19:15.236 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:15.236 "is_configured": true, 00:19:15.236 "data_offset": 2048, 00:19:15.236 "data_size": 63488 00:19:15.236 }, 00:19:15.236 { 00:19:15.236 "name": "BaseBdev4", 00:19:15.236 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:15.236 "is_configured": true, 00:19:15.236 "data_offset": 2048, 00:19:15.236 "data_size": 63488 00:19:15.236 } 00:19:15.236 ] 00:19:15.236 }' 00:19:15.236 06:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.236 06:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.804 06:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:16.064 [2024-08-13 06:14:17.748159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:16.064 [2024-08-13 06:14:17.748220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.064 [2024-08-13 06:14:17.748249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:16.064 [2024-08-13 06:14:17.748257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.064 [2024-08-13 06:14:17.748660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.064 [2024-08-13 06:14:17.748689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:16.064 [2024-08-13 06:14:17.748769] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:16.064 [2024-08-13 06:14:17.748785] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:16.064 [2024-08-13 06:14:17.748803] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:16.064 [2024-08-13 06:14:17.748837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.064 [2024-08-13 06:14:17.751881] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:19:16.064 [2024-08-13 06:14:17.753552] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.064 spare 00:19:16.064 06:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:19:17.001 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.001 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:17.001 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:17.001 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:17.001 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:17.260 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.260 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.260 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.260 "name": "raid_bdev1", 00:19:17.260 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:17.260 "strip_size_kb": 0, 00:19:17.260 "state": "online", 00:19:17.260 "raid_level": "raid1", 00:19:17.260 "superblock": true, 00:19:17.260 "num_base_bdevs": 4, 00:19:17.260 "num_base_bdevs_discovered": 3, 00:19:17.260 "num_base_bdevs_operational": 3, 00:19:17.260 "process": { 00:19:17.260 "type": "rebuild", 00:19:17.260 "target": "spare", 00:19:17.260 "progress": { 00:19:17.260 "blocks": 24576, 00:19:17.260 "percent": 38 00:19:17.260 } 00:19:17.260 }, 00:19:17.260 "base_bdevs_list": [ 00:19:17.260 { 00:19:17.260 "name": "spare", 00:19:17.260 "uuid": "23127882-1060-5b4f-8bd2-dcd34d76f6f4", 00:19:17.260 "is_configured": true, 00:19:17.260 "data_offset": 2048, 00:19:17.260 "data_size": 63488 00:19:17.260 }, 00:19:17.260 { 00:19:17.260 "name": null, 00:19:17.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.260 "is_configured": false, 00:19:17.260 "data_offset": 2048, 00:19:17.260 "data_size": 63488 00:19:17.260 }, 00:19:17.260 { 00:19:17.260 "name": "BaseBdev3", 00:19:17.260 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:17.260 "is_configured": true, 00:19:17.260 "data_offset": 2048, 00:19:17.260 "data_size": 63488 00:19:17.260 }, 00:19:17.260 { 00:19:17.260 "name": "BaseBdev4", 00:19:17.260 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:17.260 "is_configured": true, 00:19:17.260 "data_offset": 2048, 00:19:17.260 "data_size": 63488 00:19:17.260 } 00:19:17.260 ] 00:19:17.260 }' 00:19:17.260 06:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:17.261 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.261 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:17.261 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.261 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:17.520 [2024-08-13 06:14:19.191854] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.520 [2024-08-13 06:14:19.258414] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:17.520 [2024-08-13 06:14:19.258475] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.520 [2024-08-13 06:14:19.258490] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.520 [2024-08-13 06:14:19.258509] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.520 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.779 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.779 "name": "raid_bdev1", 00:19:17.779 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:17.779 "strip_size_kb": 0, 00:19:17.779 "state": "online", 00:19:17.779 "raid_level": "raid1", 00:19:17.779 "superblock": true, 00:19:17.779 "num_base_bdevs": 4, 00:19:17.779 "num_base_bdevs_discovered": 2, 00:19:17.779 "num_base_bdevs_operational": 2, 00:19:17.779 "base_bdevs_list": [ 00:19:17.779 { 00:19:17.779 "name": null, 00:19:17.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.779 "is_configured": false, 00:19:17.779 "data_offset": 2048, 00:19:17.779 "data_size": 63488 00:19:17.779 }, 00:19:17.779 { 00:19:17.779 "name": null, 00:19:17.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.779 "is_configured": false, 00:19:17.779 "data_offset": 2048, 00:19:17.779 "data_size": 63488 00:19:17.779 }, 00:19:17.779 { 00:19:17.779 "name": "BaseBdev3", 00:19:17.779 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:17.779 "is_configured": true, 00:19:17.779 "data_offset": 2048, 00:19:17.779 "data_size": 63488 00:19:17.779 }, 00:19:17.779 { 00:19:17.779 "name": "BaseBdev4", 00:19:17.779 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:17.779 "is_configured": true, 00:19:17.779 "data_offset": 2048, 00:19:17.779 "data_size": 63488 00:19:17.779 } 00:19:17.779 ] 00:19:17.779 }' 
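(For context: the verify_raid_bdev_state checks that recur throughout this trace reduce to one RPC plus jq filtering. A minimal bash sketch under the same assumptions as above, an SPDK RPC socket at /var/tmp/spdk-raid.sock and a raid bdev named raid_bdev1:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Fetch all raid bdevs and keep only raid_bdev1, as the test does.
  info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

  # Inspect the fields the test asserts on: state, raid level and the
  # number of discovered base bdevs.
  echo "$info" | jq -r '.state, .raid_level, .num_base_bdevs_discovered'

This mirrors the bdev_raid_get_bdevs / jq pattern shown in the surrounding log lines and is not an additional API beyond them.)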
00:19:17.780 06:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.780 06:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.348 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.348 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:18.348 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:18.348 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:18.348 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:18.348 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.348 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.607 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:18.607 "name": "raid_bdev1", 00:19:18.607 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:18.607 "strip_size_kb": 0, 00:19:18.607 "state": "online", 00:19:18.607 "raid_level": "raid1", 00:19:18.607 "superblock": true, 00:19:18.607 "num_base_bdevs": 4, 00:19:18.607 "num_base_bdevs_discovered": 2, 00:19:18.607 "num_base_bdevs_operational": 2, 00:19:18.607 "base_bdevs_list": [ 00:19:18.607 { 00:19:18.607 "name": null, 00:19:18.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.607 "is_configured": false, 00:19:18.607 "data_offset": 2048, 00:19:18.607 "data_size": 63488 00:19:18.607 }, 00:19:18.607 { 00:19:18.607 "name": null, 00:19:18.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.607 "is_configured": false, 00:19:18.607 "data_offset": 2048, 00:19:18.607 "data_size": 63488 00:19:18.607 }, 00:19:18.607 { 00:19:18.607 "name": "BaseBdev3", 00:19:18.607 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:18.607 "is_configured": true, 00:19:18.607 "data_offset": 2048, 00:19:18.607 "data_size": 63488 00:19:18.607 }, 00:19:18.607 { 00:19:18.607 "name": "BaseBdev4", 00:19:18.607 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:18.607 "is_configured": true, 00:19:18.607 "data_offset": 2048, 00:19:18.607 "data_size": 63488 00:19:18.607 } 00:19:18.607 ] 00:19:18.607 }' 00:19:18.607 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:18.607 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:18.607 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:18.607 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:18.607 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:18.866 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:19.125 [2024-08-13 06:14:20.715083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:19.125 [2024-08-13 06:14:20.715135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:19:19.125 [2024-08-13 06:14:20.715153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:19.125 [2024-08-13 06:14:20.715163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.125 [2024-08-13 06:14:20.715528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.125 [2024-08-13 06:14:20.715552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:19.125 [2024-08-13 06:14:20.715614] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:19.126 [2024-08-13 06:14:20.715633] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:19.126 [2024-08-13 06:14:20.715648] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:19.126 BaseBdev1 00:19:19.126 06:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.063 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.322 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.322 "name": "raid_bdev1", 00:19:20.322 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:20.322 "strip_size_kb": 0, 00:19:20.322 "state": "online", 00:19:20.322 "raid_level": "raid1", 00:19:20.322 "superblock": true, 00:19:20.322 "num_base_bdevs": 4, 00:19:20.322 "num_base_bdevs_discovered": 2, 00:19:20.322 "num_base_bdevs_operational": 2, 00:19:20.322 "base_bdevs_list": [ 00:19:20.322 { 00:19:20.322 "name": null, 00:19:20.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.322 "is_configured": false, 00:19:20.322 "data_offset": 2048, 00:19:20.322 "data_size": 63488 00:19:20.322 }, 00:19:20.322 { 00:19:20.322 "name": null, 00:19:20.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.322 "is_configured": false, 00:19:20.322 "data_offset": 2048, 00:19:20.322 "data_size": 63488 00:19:20.322 }, 00:19:20.322 { 00:19:20.322 "name": "BaseBdev3", 00:19:20.322 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:20.322 "is_configured": 
true, 00:19:20.322 "data_offset": 2048, 00:19:20.322 "data_size": 63488 00:19:20.322 }, 00:19:20.322 { 00:19:20.322 "name": "BaseBdev4", 00:19:20.322 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:20.322 "is_configured": true, 00:19:20.322 "data_offset": 2048, 00:19:20.322 "data_size": 63488 00:19:20.322 } 00:19:20.322 ] 00:19:20.322 }' 00:19:20.322 06:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.322 06:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.891 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.891 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:20.891 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:20.891 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:20.891 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:20.891 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.891 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.151 "name": "raid_bdev1", 00:19:21.151 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:21.151 "strip_size_kb": 0, 00:19:21.151 "state": "online", 00:19:21.151 "raid_level": "raid1", 00:19:21.151 "superblock": true, 00:19:21.151 "num_base_bdevs": 4, 00:19:21.151 "num_base_bdevs_discovered": 2, 00:19:21.151 "num_base_bdevs_operational": 2, 00:19:21.151 "base_bdevs_list": [ 00:19:21.151 { 00:19:21.151 "name": null, 00:19:21.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.151 "is_configured": false, 00:19:21.151 "data_offset": 2048, 00:19:21.151 "data_size": 63488 00:19:21.151 }, 00:19:21.151 { 00:19:21.151 "name": null, 00:19:21.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.151 "is_configured": false, 00:19:21.151 "data_offset": 2048, 00:19:21.151 "data_size": 63488 00:19:21.151 }, 00:19:21.151 { 00:19:21.151 "name": "BaseBdev3", 00:19:21.151 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:21.151 "is_configured": true, 00:19:21.151 "data_offset": 2048, 00:19:21.151 "data_size": 63488 00:19:21.151 }, 00:19:21.151 { 00:19:21.151 "name": "BaseBdev4", 00:19:21.151 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:21.151 "is_configured": true, 00:19:21.151 "data_offset": 2048, 00:19:21.151 "data_size": 63488 00:19:21.151 } 00:19:21.151 ] 00:19:21.151 }' 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@646 
-- # local es=0 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:21.151 06:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.411 [2024-08-13 06:14:22.995299] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.411 [2024-08-13 06:14:22.995451] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:21.411 [2024-08-13 06:14:22.995468] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:21.411 request: 00:19:21.411 { 00:19:21.411 "base_bdev": "BaseBdev1", 00:19:21.411 "raid_bdev": "raid_bdev1", 00:19:21.411 "method": "bdev_raid_add_base_bdev", 00:19:21.411 "req_id": 1 00:19:21.411 } 00:19:21.411 Got JSON-RPC error response 00:19:21.411 response: 00:19:21.411 { 00:19:21.411 "code": -22, 00:19:21.411 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:21.411 } 00:19:21.411 06:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:19:21.411 06:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:19:21.411 06:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:19:21.411 06:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:19:21.411 06:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.349 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.609 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.609 "name": "raid_bdev1", 00:19:22.609 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:22.609 "strip_size_kb": 0, 00:19:22.609 "state": "online", 00:19:22.609 "raid_level": "raid1", 00:19:22.609 "superblock": true, 00:19:22.609 "num_base_bdevs": 4, 00:19:22.609 "num_base_bdevs_discovered": 2, 00:19:22.609 "num_base_bdevs_operational": 2, 00:19:22.609 "base_bdevs_list": [ 00:19:22.609 { 00:19:22.609 "name": null, 00:19:22.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.609 "is_configured": false, 00:19:22.609 "data_offset": 2048, 00:19:22.609 "data_size": 63488 00:19:22.609 }, 00:19:22.609 { 00:19:22.609 "name": null, 00:19:22.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.609 "is_configured": false, 00:19:22.609 "data_offset": 2048, 00:19:22.609 "data_size": 63488 00:19:22.609 }, 00:19:22.609 { 00:19:22.609 "name": "BaseBdev3", 00:19:22.609 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:22.609 "is_configured": true, 00:19:22.609 "data_offset": 2048, 00:19:22.609 "data_size": 63488 00:19:22.609 }, 00:19:22.609 { 00:19:22.609 "name": "BaseBdev4", 00:19:22.609 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:22.609 "is_configured": true, 00:19:22.609 "data_offset": 2048, 00:19:22.609 "data_size": 63488 00:19:22.609 } 00:19:22.609 ] 00:19:22.609 }' 00:19:22.609 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.609 06:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.179 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.179 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:23.179 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:23.179 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:23.179 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:23.179 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.179 06:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:23.440 "name": "raid_bdev1", 00:19:23.440 "uuid": "b9b41ab1-fa68-454d-887a-eeafef1e0c3c", 00:19:23.440 "strip_size_kb": 0, 00:19:23.440 "state": "online", 00:19:23.440 "raid_level": "raid1", 00:19:23.440 "superblock": 
true, 00:19:23.440 "num_base_bdevs": 4, 00:19:23.440 "num_base_bdevs_discovered": 2, 00:19:23.440 "num_base_bdevs_operational": 2, 00:19:23.440 "base_bdevs_list": [ 00:19:23.440 { 00:19:23.440 "name": null, 00:19:23.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.440 "is_configured": false, 00:19:23.440 "data_offset": 2048, 00:19:23.440 "data_size": 63488 00:19:23.440 }, 00:19:23.440 { 00:19:23.440 "name": null, 00:19:23.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.440 "is_configured": false, 00:19:23.440 "data_offset": 2048, 00:19:23.440 "data_size": 63488 00:19:23.440 }, 00:19:23.440 { 00:19:23.440 "name": "BaseBdev3", 00:19:23.440 "uuid": "ae8c0e5f-589c-57e5-80ad-bc4b6ead1a0c", 00:19:23.440 "is_configured": true, 00:19:23.440 "data_offset": 2048, 00:19:23.440 "data_size": 63488 00:19:23.440 }, 00:19:23.440 { 00:19:23.440 "name": "BaseBdev4", 00:19:23.440 "uuid": "b028f4ab-577c-5e09-8894-92e73deeda52", 00:19:23.440 "is_configured": true, 00:19:23.440 "data_offset": 2048, 00:19:23.440 "data_size": 63488 00:19:23.440 } 00:19:23.440 ] 00:19:23.440 }' 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 95101 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 95101 ']' 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 95101 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95101 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:23.440 killing process with pid 95101 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95101' 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 95101 00:19:23.440 Received shutdown signal, test time was about 60.000000 seconds 00:19:23.440 00:19:23.440 Latency(us) 00:19:23.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.440 =================================================================================================================== 00:19:23.440 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.440 [2024-08-13 06:14:25.154676] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.440 [2024-08-13 06:14:25.154783] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.440 [2024-08-13 06:14:25.154847] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.440 [2024-08-13 06:14:25.154858] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:19:23.440 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 95101 00:19:23.440 [2024-08-13 06:14:25.205143] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.701 06:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:19:23.701 00:19:23.701 real 0m32.689s 00:19:23.701 user 0m47.840s 00:19:23.701 sys 0m5.409s 00:19:23.701 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:23.701 06:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.701 ************************************ 00:19:23.701 END TEST raid_rebuild_test_sb 00:19:23.701 ************************************ 00:19:23.962 06:14:25 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:19:23.962 06:14:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:19:23.962 06:14:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:23.962 06:14:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.962 ************************************ 00:19:23.962 START TEST raid_rebuild_test_io 00:19:23.962 ************************************ 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false true true 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=95950 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 95950 /var/tmp/spdk-raid.sock 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 95950 ']' 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:23.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:23.962 06:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.962 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:23.962 Zero copy mechanism will not be used. 00:19:23.962 [2024-08-13 06:14:25.629670] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
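(For context: the raid_rebuild_test_io setup that follows builds its base bdevs with the RPC pattern visible in the next log lines. A minimal bash sketch of one base bdev plus the delayed spare, assuming bdevperf was started with -z and is listening on /var/tmp/spdk-raid.sock; all commands and names are copied from the trace below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # 32 MB malloc bdev with 512-byte blocks, wrapped in a passthru bdev.
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $rpc -s $sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

  # The spare sits behind a delay bdev (spare_delay), as in this trace.
  $rpc -s $sock bdev_malloc_create 32 512 -b spare_malloc
  $rpc -s $sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc -s $sock bdev_passthru_create -b spare_delay -p spare

The sizes, delay parameters and bdev names reflect what the test logs here, not general recommendations.)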
00:19:23.962 [2024-08-13 06:14:25.629810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95950 ] 00:19:24.222 [2024-08-13 06:14:25.771249] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.222 [2024-08-13 06:14:25.816689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.222 [2024-08-13 06:14:25.859391] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.222 [2024-08-13 06:14:25.859437] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.791 06:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:24.791 06:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:19:24.791 06:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:24.791 06:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:25.050 BaseBdev1_malloc 00:19:25.050 06:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:25.050 [2024-08-13 06:14:26.819676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:25.050 [2024-08-13 06:14:26.819777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.050 [2024-08-13 06:14:26.819814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:25.050 [2024-08-13 06:14:26.819828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.050 [2024-08-13 06:14:26.821767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.050 [2024-08-13 06:14:26.821813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:25.050 BaseBdev1 00:19:25.050 06:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:25.050 06:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:25.310 BaseBdev2_malloc 00:19:25.310 06:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:25.569 [2024-08-13 06:14:27.235398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:25.569 [2024-08-13 06:14:27.235497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.569 [2024-08-13 06:14:27.235535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:25.569 [2024-08-13 06:14:27.235567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.569 [2024-08-13 06:14:27.237485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.569 [2024-08-13 06:14:27.237563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:19:25.569 BaseBdev2 00:19:25.569 06:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:25.569 06:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:25.828 BaseBdev3_malloc 00:19:25.828 06:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:26.087 [2024-08-13 06:14:27.662124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:26.087 [2024-08-13 06:14:27.662222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.087 [2024-08-13 06:14:27.662258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:26.087 [2024-08-13 06:14:27.662287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.087 [2024-08-13 06:14:27.664171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.087 [2024-08-13 06:14:27.664240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:26.087 BaseBdev3 00:19:26.087 06:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:26.087 06:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:26.347 BaseBdev4_malloc 00:19:26.347 06:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:26.347 [2024-08-13 06:14:28.086121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:26.347 [2024-08-13 06:14:28.086209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.347 [2024-08-13 06:14:28.086241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:26.347 [2024-08-13 06:14:28.086272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.347 [2024-08-13 06:14:28.088218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.347 [2024-08-13 06:14:28.088287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:26.347 BaseBdev4 00:19:26.347 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:26.607 spare_malloc 00:19:26.607 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:26.866 spare_delay 00:19:26.866 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:27.126 [2024-08-13 06:14:28.682091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:27.126 [2024-08-13 06:14:28.682140] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:19:27.126 [2024-08-13 06:14:28.682156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:27.126 [2024-08-13 06:14:28.682166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.126 [2024-08-13 06:14:28.684056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.126 [2024-08-13 06:14:28.684092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:27.126 spare 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:27.126 [2024-08-13 06:14:28.845964] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.126 [2024-08-13 06:14:28.847685] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.126 [2024-08-13 06:14:28.847738] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:27.126 [2024-08-13 06:14:28.847779] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:27.126 [2024-08-13 06:14:28.847854] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:19:27.126 [2024-08-13 06:14:28.847865] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:27.126 [2024-08-13 06:14:28.848129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:19:27.126 [2024-08-13 06:14:28.848250] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:19:27.126 [2024-08-13 06:14:28.848265] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:19:27.126 [2024-08-13 06:14:28.848375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.126 06:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.386 06:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:27.386 "name": "raid_bdev1", 00:19:27.386 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:27.386 "strip_size_kb": 0, 00:19:27.386 "state": "online", 00:19:27.386 "raid_level": "raid1", 00:19:27.386 "superblock": false, 00:19:27.386 "num_base_bdevs": 4, 00:19:27.386 "num_base_bdevs_discovered": 4, 00:19:27.386 "num_base_bdevs_operational": 4, 00:19:27.386 "base_bdevs_list": [ 00:19:27.386 { 00:19:27.386 "name": "BaseBdev1", 00:19:27.386 "uuid": "1ec6468e-b07f-5128-8384-fcb2f5ba0927", 00:19:27.386 "is_configured": true, 00:19:27.386 "data_offset": 0, 00:19:27.386 "data_size": 65536 00:19:27.386 }, 00:19:27.386 { 00:19:27.386 "name": "BaseBdev2", 00:19:27.386 "uuid": "7fdc76f1-e36f-5594-81c6-afc5ef192b77", 00:19:27.386 "is_configured": true, 00:19:27.386 "data_offset": 0, 00:19:27.386 "data_size": 65536 00:19:27.386 }, 00:19:27.386 { 00:19:27.386 "name": "BaseBdev3", 00:19:27.386 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:27.386 "is_configured": true, 00:19:27.386 "data_offset": 0, 00:19:27.386 "data_size": 65536 00:19:27.386 }, 00:19:27.386 { 00:19:27.386 "name": "BaseBdev4", 00:19:27.386 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:27.386 "is_configured": true, 00:19:27.386 "data_offset": 0, 00:19:27.386 "data_size": 65536 00:19:27.386 } 00:19:27.386 ] 00:19:27.386 }' 00:19:27.386 06:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.386 06:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.955 06:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:19:27.955 06:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:28.215 [2024-08-13 06:14:29.828628] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.215 06:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:19:28.215 06:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:28.215 06:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.475 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:19:28.475 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:19:28.475 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:28.475 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:28.475 [2024-08-13 06:14:30.141760] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:19:28.475 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:28.475 Zero copy mechanism will not be used. 00:19:28.475 Running I/O for 60 seconds... 
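[Editor's sketch] With background I/O running, the step above removes BaseBdev1 from the live array and then re-reads the array state, expecting it to stay online in degraded mode (3 of 4 base bdevs, the emptied slot reported as a null entry). A minimal sketch of that check using the same RPCs and jq filters that appear in the trace; the test's verify_raid_bdev_state helper checks more fields than this:

    SOCK=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s "$SOCK" bdev_raid_remove_base_bdev BaseBdev1
    # Pull the raid_bdev1 entry and confirm it is online with 3 of 4 base bdevs.
    info=$("$rpc" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 3 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 3 ]]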
00:19:28.475 [2024-08-13 06:14:30.245295] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:28.475 [2024-08-13 06:14:30.255345] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.735 "name": "raid_bdev1", 00:19:28.735 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:28.735 "strip_size_kb": 0, 00:19:28.735 "state": "online", 00:19:28.735 "raid_level": "raid1", 00:19:28.735 "superblock": false, 00:19:28.735 "num_base_bdevs": 4, 00:19:28.735 "num_base_bdevs_discovered": 3, 00:19:28.735 "num_base_bdevs_operational": 3, 00:19:28.735 "base_bdevs_list": [ 00:19:28.735 { 00:19:28.735 "name": null, 00:19:28.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.735 "is_configured": false, 00:19:28.735 "data_offset": 0, 00:19:28.735 "data_size": 65536 00:19:28.735 }, 00:19:28.735 { 00:19:28.735 "name": "BaseBdev2", 00:19:28.735 "uuid": "7fdc76f1-e36f-5594-81c6-afc5ef192b77", 00:19:28.735 "is_configured": true, 00:19:28.735 "data_offset": 0, 00:19:28.735 "data_size": 65536 00:19:28.735 }, 00:19:28.735 { 00:19:28.735 "name": "BaseBdev3", 00:19:28.735 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:28.735 "is_configured": true, 00:19:28.735 "data_offset": 0, 00:19:28.735 "data_size": 65536 00:19:28.735 }, 00:19:28.735 { 00:19:28.735 "name": "BaseBdev4", 00:19:28.735 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:28.735 "is_configured": true, 00:19:28.735 "data_offset": 0, 00:19:28.735 "data_size": 65536 00:19:28.735 } 00:19:28.735 ] 00:19:28.735 }' 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.735 06:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.305 06:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:29.565 [2024-08-13 06:14:31.200189] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:19:29.565 [2024-08-13 06:14:31.250509] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:19:29.565 [2024-08-13 06:14:31.252357] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.565 06:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:29.825 [2024-08-13 06:14:31.383788] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:29.825 [2024-08-13 06:14:31.504248] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:29.825 [2024-08-13 06:14:31.504876] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:30.084 [2024-08-13 06:14:31.832361] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:30.344 [2024-08-13 06:14:32.061181] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:30.344 [2024-08-13 06:14:32.061833] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:30.608 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.608 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:30.608 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:30.608 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:30.608 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:30.608 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.608 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.880 [2024-08-13 06:14:32.428568] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:30.880 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.880 "name": "raid_bdev1", 00:19:30.880 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:30.880 "strip_size_kb": 0, 00:19:30.880 "state": "online", 00:19:30.880 "raid_level": "raid1", 00:19:30.880 "superblock": false, 00:19:30.880 "num_base_bdevs": 4, 00:19:30.880 "num_base_bdevs_discovered": 4, 00:19:30.880 "num_base_bdevs_operational": 4, 00:19:30.880 "process": { 00:19:30.880 "type": "rebuild", 00:19:30.880 "target": "spare", 00:19:30.880 "progress": { 00:19:30.880 "blocks": 14336, 00:19:30.880 "percent": 21 00:19:30.880 } 00:19:30.880 }, 00:19:30.880 "base_bdevs_list": [ 00:19:30.880 { 00:19:30.880 "name": "spare", 00:19:30.880 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:30.880 "is_configured": true, 00:19:30.880 "data_offset": 0, 00:19:30.880 "data_size": 65536 00:19:30.880 }, 00:19:30.880 { 00:19:30.880 "name": "BaseBdev2", 00:19:30.880 "uuid": "7fdc76f1-e36f-5594-81c6-afc5ef192b77", 00:19:30.880 "is_configured": true, 00:19:30.880 "data_offset": 0, 00:19:30.880 "data_size": 65536 00:19:30.880 }, 00:19:30.880 { 00:19:30.880 "name": 
"BaseBdev3", 00:19:30.880 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:30.880 "is_configured": true, 00:19:30.880 "data_offset": 0, 00:19:30.880 "data_size": 65536 00:19:30.880 }, 00:19:30.880 { 00:19:30.880 "name": "BaseBdev4", 00:19:30.880 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:30.880 "is_configured": true, 00:19:30.880 "data_offset": 0, 00:19:30.880 "data_size": 65536 00:19:30.880 } 00:19:30.880 ] 00:19:30.880 }' 00:19:30.880 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:30.880 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.880 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:30.880 [2024-08-13 06:14:32.544308] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:30.880 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.880 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:31.160 [2024-08-13 06:14:32.758780] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.160 [2024-08-13 06:14:32.786659] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:31.160 [2024-08-13 06:14:32.893305] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.160 [2024-08-13 06:14:32.908440] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.160 [2024-08-13 06:14:32.908537] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.160 [2024-08-13 06:14:32.908557] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:31.160 [2024-08-13 06:14:32.929774] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.439 06:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:19:31.439 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.439 "name": "raid_bdev1", 00:19:31.439 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:31.439 "strip_size_kb": 0, 00:19:31.439 "state": "online", 00:19:31.439 "raid_level": "raid1", 00:19:31.439 "superblock": false, 00:19:31.439 "num_base_bdevs": 4, 00:19:31.439 "num_base_bdevs_discovered": 3, 00:19:31.439 "num_base_bdevs_operational": 3, 00:19:31.439 "base_bdevs_list": [ 00:19:31.439 { 00:19:31.439 "name": null, 00:19:31.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.439 "is_configured": false, 00:19:31.439 "data_offset": 0, 00:19:31.439 "data_size": 65536 00:19:31.439 }, 00:19:31.439 { 00:19:31.439 "name": "BaseBdev2", 00:19:31.439 "uuid": "7fdc76f1-e36f-5594-81c6-afc5ef192b77", 00:19:31.439 "is_configured": true, 00:19:31.439 "data_offset": 0, 00:19:31.439 "data_size": 65536 00:19:31.439 }, 00:19:31.439 { 00:19:31.439 "name": "BaseBdev3", 00:19:31.439 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:31.439 "is_configured": true, 00:19:31.439 "data_offset": 0, 00:19:31.439 "data_size": 65536 00:19:31.439 }, 00:19:31.439 { 00:19:31.439 "name": "BaseBdev4", 00:19:31.439 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:31.439 "is_configured": true, 00:19:31.439 "data_offset": 0, 00:19:31.439 "data_size": 65536 00:19:31.439 } 00:19:31.439 ] 00:19:31.439 }' 00:19:31.439 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.439 06:14:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.008 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.008 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:32.008 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:32.008 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:32.008 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:32.008 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.008 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.267 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.267 "name": "raid_bdev1", 00:19:32.268 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:32.268 "strip_size_kb": 0, 00:19:32.268 "state": "online", 00:19:32.268 "raid_level": "raid1", 00:19:32.268 "superblock": false, 00:19:32.268 "num_base_bdevs": 4, 00:19:32.268 "num_base_bdevs_discovered": 3, 00:19:32.268 "num_base_bdevs_operational": 3, 00:19:32.268 "base_bdevs_list": [ 00:19:32.268 { 00:19:32.268 "name": null, 00:19:32.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.268 "is_configured": false, 00:19:32.268 "data_offset": 0, 00:19:32.268 "data_size": 65536 00:19:32.268 }, 00:19:32.268 { 00:19:32.268 "name": "BaseBdev2", 00:19:32.268 "uuid": "7fdc76f1-e36f-5594-81c6-afc5ef192b77", 00:19:32.268 "is_configured": true, 00:19:32.268 "data_offset": 0, 00:19:32.268 "data_size": 65536 00:19:32.268 }, 00:19:32.268 { 00:19:32.268 "name": "BaseBdev3", 00:19:32.268 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 
00:19:32.268 "is_configured": true, 00:19:32.268 "data_offset": 0, 00:19:32.268 "data_size": 65536 00:19:32.268 }, 00:19:32.268 { 00:19:32.268 "name": "BaseBdev4", 00:19:32.268 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:32.268 "is_configured": true, 00:19:32.268 "data_offset": 0, 00:19:32.268 "data_size": 65536 00:19:32.268 } 00:19:32.268 ] 00:19:32.268 }' 00:19:32.268 06:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:32.268 06:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:32.268 06:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:32.528 06:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:32.528 06:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.528 [2024-08-13 06:14:34.247181] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.528 [2024-08-13 06:14:34.280298] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:19:32.528 [2024-08-13 06:14:34.282480] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.528 06:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:19:32.787 [2024-08-13 06:14:34.397178] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:32.787 [2024-08-13 06:14:34.397652] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:32.787 [2024-08-13 06:14:34.537213] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:32.787 [2024-08-13 06:14:34.537433] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:33.357 [2024-08-13 06:14:34.909770] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:33.617 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.617 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:33.617 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:33.617 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:33.617 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:33.617 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.617 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.877 "name": "raid_bdev1", 00:19:33.877 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:33.877 "strip_size_kb": 0, 00:19:33.877 "state": "online", 00:19:33.877 "raid_level": "raid1", 00:19:33.877 "superblock": false, 00:19:33.877 "num_base_bdevs": 4, 00:19:33.877 
"num_base_bdevs_discovered": 4, 00:19:33.877 "num_base_bdevs_operational": 4, 00:19:33.877 "process": { 00:19:33.877 "type": "rebuild", 00:19:33.877 "target": "spare", 00:19:33.877 "progress": { 00:19:33.877 "blocks": 18432, 00:19:33.877 "percent": 28 00:19:33.877 } 00:19:33.877 }, 00:19:33.877 "base_bdevs_list": [ 00:19:33.877 { 00:19:33.877 "name": "spare", 00:19:33.877 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:33.877 "is_configured": true, 00:19:33.877 "data_offset": 0, 00:19:33.877 "data_size": 65536 00:19:33.877 }, 00:19:33.877 { 00:19:33.877 "name": "BaseBdev2", 00:19:33.877 "uuid": "7fdc76f1-e36f-5594-81c6-afc5ef192b77", 00:19:33.877 "is_configured": true, 00:19:33.877 "data_offset": 0, 00:19:33.877 "data_size": 65536 00:19:33.877 }, 00:19:33.877 { 00:19:33.877 "name": "BaseBdev3", 00:19:33.877 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:33.877 "is_configured": true, 00:19:33.877 "data_offset": 0, 00:19:33.877 "data_size": 65536 00:19:33.877 }, 00:19:33.877 { 00:19:33.877 "name": "BaseBdev4", 00:19:33.877 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:33.877 "is_configured": true, 00:19:33.877 "data_offset": 0, 00:19:33.877 "data_size": 65536 00:19:33.877 } 00:19:33.877 ] 00:19:33.877 }' 00:19:33.877 [2024-08-13 06:14:35.487439] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:33.877 [2024-08-13 06:14:35.488559] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:19:33.877 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:34.137 [2024-08-13 06:14:35.759123] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:34.397 [2024-08-13 06:14:35.934065] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:19:34.397 [2024-08-13 06:14:35.934207] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.397 06:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.397 [2024-08-13 06:14:36.044284] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:34.397 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.397 "name": "raid_bdev1", 00:19:34.397 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:34.397 "strip_size_kb": 0, 00:19:34.397 "state": "online", 00:19:34.397 "raid_level": "raid1", 00:19:34.397 "superblock": false, 00:19:34.397 "num_base_bdevs": 4, 00:19:34.397 "num_base_bdevs_discovered": 3, 00:19:34.397 "num_base_bdevs_operational": 3, 00:19:34.397 "process": { 00:19:34.397 "type": "rebuild", 00:19:34.397 "target": "spare", 00:19:34.397 "progress": { 00:19:34.397 "blocks": 26624, 00:19:34.397 "percent": 40 00:19:34.397 } 00:19:34.397 }, 00:19:34.397 "base_bdevs_list": [ 00:19:34.397 { 00:19:34.397 "name": "spare", 00:19:34.397 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:34.397 "is_configured": true, 00:19:34.397 "data_offset": 0, 00:19:34.397 "data_size": 65536 00:19:34.397 }, 00:19:34.397 { 00:19:34.397 "name": null, 00:19:34.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.397 "is_configured": false, 00:19:34.397 "data_offset": 0, 00:19:34.397 "data_size": 65536 00:19:34.397 }, 00:19:34.397 { 00:19:34.397 "name": "BaseBdev3", 00:19:34.397 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:34.397 "is_configured": true, 00:19:34.397 "data_offset": 0, 00:19:34.397 "data_size": 65536 00:19:34.397 }, 00:19:34.397 { 00:19:34.397 "name": "BaseBdev4", 00:19:34.397 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:34.397 "is_configured": true, 00:19:34.397 "data_offset": 0, 00:19:34.397 "data_size": 65536 00:19:34.397 } 00:19:34.397 ] 00:19:34.397 }' 00:19:34.397 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=828 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.657 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.657 [2024-08-13 06:14:36.382538] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:34.917 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.917 "name": "raid_bdev1", 00:19:34.917 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:34.917 "strip_size_kb": 0, 00:19:34.917 "state": "online", 00:19:34.917 "raid_level": "raid1", 00:19:34.917 "superblock": false, 00:19:34.917 "num_base_bdevs": 4, 00:19:34.917 "num_base_bdevs_discovered": 3, 00:19:34.917 "num_base_bdevs_operational": 3, 00:19:34.917 "process": { 00:19:34.917 "type": "rebuild", 00:19:34.917 "target": "spare", 00:19:34.917 "progress": { 00:19:34.917 "blocks": 32768, 00:19:34.917 "percent": 50 00:19:34.917 } 00:19:34.917 }, 00:19:34.917 "base_bdevs_list": [ 00:19:34.917 { 00:19:34.917 "name": "spare", 00:19:34.917 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:34.917 "is_configured": true, 00:19:34.917 "data_offset": 0, 00:19:34.917 "data_size": 65536 00:19:34.917 }, 00:19:34.917 { 00:19:34.917 "name": null, 00:19:34.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.917 "is_configured": false, 00:19:34.917 "data_offset": 0, 00:19:34.917 "data_size": 65536 00:19:34.917 }, 00:19:34.917 { 00:19:34.917 "name": "BaseBdev3", 00:19:34.917 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:34.917 "is_configured": true, 00:19:34.917 "data_offset": 0, 00:19:34.917 "data_size": 65536 00:19:34.917 }, 00:19:34.917 { 00:19:34.917 "name": "BaseBdev4", 00:19:34.917 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:34.917 "is_configured": true, 00:19:34.917 "data_offset": 0, 00:19:34.917 "data_size": 65536 00:19:34.917 } 00:19:34.917 ] 00:19:34.917 }' 00:19:34.917 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:34.917 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.917 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:34.917 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.917 06:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:34.917 [2024-08-13 06:14:36.597454] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:35.177 [2024-08-13 06:14:36.917812] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:35.437 [2024-08-13 06:14:37.037158] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:35.696 [2024-08-13 06:14:37.357709] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
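[Editor's sketch] The xtrace around this point is the rebuild-progress loop: after bdev_raid_add_base_bdev raid_bdev1 spare, the test repeatedly samples .process.type, .process.target and the progress counters (blocks/percent climb from 14336/21 to 51200/78 in the dumps above), sleeping one second between samples. A condensed sketch of that loop; the real one also re-verifies the bdev list and enforces the overall timeout seen above (local timeout=828):

    SOCK=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while :; do
        info=$("$rpc" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # The process object disappears once the rebuild finishes.
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        echo "rebuild -> $(jq -r '.process.target' <<< "$info"):" \
             "$(jq -r '.process.progress.blocks' <<< "$info") blocks" \
             "($(jq -r '.process.progress.percent' <<< "$info")%)"
        sleep 1
    done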
00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.956 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.215 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.215 "name": "raid_bdev1", 00:19:36.215 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:36.215 "strip_size_kb": 0, 00:19:36.215 "state": "online", 00:19:36.215 "raid_level": "raid1", 00:19:36.215 "superblock": false, 00:19:36.215 "num_base_bdevs": 4, 00:19:36.215 "num_base_bdevs_discovered": 3, 00:19:36.215 "num_base_bdevs_operational": 3, 00:19:36.215 "process": { 00:19:36.215 "type": "rebuild", 00:19:36.215 "target": "spare", 00:19:36.215 "progress": { 00:19:36.215 "blocks": 51200, 00:19:36.215 "percent": 78 00:19:36.215 } 00:19:36.215 }, 00:19:36.215 "base_bdevs_list": [ 00:19:36.215 { 00:19:36.215 "name": "spare", 00:19:36.215 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:36.215 "is_configured": true, 00:19:36.215 "data_offset": 0, 00:19:36.215 "data_size": 65536 00:19:36.215 }, 00:19:36.215 { 00:19:36.215 "name": null, 00:19:36.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.215 "is_configured": false, 00:19:36.215 "data_offset": 0, 00:19:36.215 "data_size": 65536 00:19:36.215 }, 00:19:36.215 { 00:19:36.215 "name": "BaseBdev3", 00:19:36.215 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:36.215 "is_configured": true, 00:19:36.215 "data_offset": 0, 00:19:36.215 "data_size": 65536 00:19:36.215 }, 00:19:36.215 { 00:19:36.215 "name": "BaseBdev4", 00:19:36.215 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:36.215 "is_configured": true, 00:19:36.215 "data_offset": 0, 00:19:36.215 "data_size": 65536 00:19:36.215 } 00:19:36.215 ] 00:19:36.215 }' 00:19:36.215 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:36.215 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.215 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:36.215 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.216 06:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:36.784 [2024-08-13 06:14:38.445356] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:36.784 [2024-08-13 06:14:38.550015] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:36.784 [2024-08-13 06:14:38.552065] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.354 06:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:37.354 06:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.354 06:14:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:37.354 06:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:37.354 06:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:37.354 06:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:37.354 06:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.354 06:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.354 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.354 "name": "raid_bdev1", 00:19:37.354 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:37.354 "strip_size_kb": 0, 00:19:37.354 "state": "online", 00:19:37.354 "raid_level": "raid1", 00:19:37.354 "superblock": false, 00:19:37.354 "num_base_bdevs": 4, 00:19:37.354 "num_base_bdevs_discovered": 3, 00:19:37.354 "num_base_bdevs_operational": 3, 00:19:37.354 "base_bdevs_list": [ 00:19:37.354 { 00:19:37.354 "name": "spare", 00:19:37.354 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:37.354 "is_configured": true, 00:19:37.354 "data_offset": 0, 00:19:37.354 "data_size": 65536 00:19:37.354 }, 00:19:37.354 { 00:19:37.354 "name": null, 00:19:37.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.354 "is_configured": false, 00:19:37.354 "data_offset": 0, 00:19:37.354 "data_size": 65536 00:19:37.354 }, 00:19:37.354 { 00:19:37.354 "name": "BaseBdev3", 00:19:37.354 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:37.354 "is_configured": true, 00:19:37.354 "data_offset": 0, 00:19:37.354 "data_size": 65536 00:19:37.354 }, 00:19:37.354 { 00:19:37.354 "name": "BaseBdev4", 00:19:37.354 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:37.354 "is_configured": true, 00:19:37.354 "data_offset": 0, 00:19:37.354 "data_size": 65536 00:19:37.354 } 00:19:37.354 ] 00:19:37.354 }' 00:19:37.354 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:37.614 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:37.614 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:37.614 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:37.614 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.615 06:14:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.615 "name": "raid_bdev1", 00:19:37.615 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:37.615 "strip_size_kb": 0, 00:19:37.615 "state": "online", 00:19:37.615 "raid_level": "raid1", 00:19:37.615 "superblock": false, 00:19:37.615 "num_base_bdevs": 4, 00:19:37.615 "num_base_bdevs_discovered": 3, 00:19:37.615 "num_base_bdevs_operational": 3, 00:19:37.615 "base_bdevs_list": [ 00:19:37.615 { 00:19:37.615 "name": "spare", 00:19:37.615 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:37.615 "is_configured": true, 00:19:37.615 "data_offset": 0, 00:19:37.615 "data_size": 65536 00:19:37.615 }, 00:19:37.615 { 00:19:37.615 "name": null, 00:19:37.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.615 "is_configured": false, 00:19:37.615 "data_offset": 0, 00:19:37.615 "data_size": 65536 00:19:37.615 }, 00:19:37.615 { 00:19:37.615 "name": "BaseBdev3", 00:19:37.615 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:37.615 "is_configured": true, 00:19:37.615 "data_offset": 0, 00:19:37.615 "data_size": 65536 00:19:37.615 }, 00:19:37.615 { 00:19:37.615 "name": "BaseBdev4", 00:19:37.615 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:37.615 "is_configured": true, 00:19:37.615 "data_offset": 0, 00:19:37.615 "data_size": 65536 00:19:37.615 } 00:19:37.615 ] 00:19:37.615 }' 00:19:37.615 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.875 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.135 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.135 "name": "raid_bdev1", 00:19:38.135 "uuid": "7c92c012-3e7b-4bcd-b601-cbb0fd4f5728", 00:19:38.135 "strip_size_kb": 0, 00:19:38.135 "state": "online", 00:19:38.135 "raid_level": "raid1", 00:19:38.135 
"superblock": false, 00:19:38.135 "num_base_bdevs": 4, 00:19:38.135 "num_base_bdevs_discovered": 3, 00:19:38.135 "num_base_bdevs_operational": 3, 00:19:38.135 "base_bdevs_list": [ 00:19:38.135 { 00:19:38.135 "name": "spare", 00:19:38.135 "uuid": "78384d1f-bde2-56ec-bddb-6621aee454b8", 00:19:38.135 "is_configured": true, 00:19:38.135 "data_offset": 0, 00:19:38.135 "data_size": 65536 00:19:38.135 }, 00:19:38.135 { 00:19:38.135 "name": null, 00:19:38.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.135 "is_configured": false, 00:19:38.135 "data_offset": 0, 00:19:38.135 "data_size": 65536 00:19:38.135 }, 00:19:38.135 { 00:19:38.135 "name": "BaseBdev3", 00:19:38.135 "uuid": "3d940301-7779-5da2-a03a-bf24c3e80250", 00:19:38.135 "is_configured": true, 00:19:38.135 "data_offset": 0, 00:19:38.135 "data_size": 65536 00:19:38.135 }, 00:19:38.135 { 00:19:38.135 "name": "BaseBdev4", 00:19:38.135 "uuid": "9776d7c2-b061-59d3-90a9-f3f07a89a86b", 00:19:38.135 "is_configured": true, 00:19:38.135 "data_offset": 0, 00:19:38.135 "data_size": 65536 00:19:38.135 } 00:19:38.135 ] 00:19:38.135 }' 00:19:38.135 06:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.135 06:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.705 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:38.705 [2024-08-13 06:14:40.427885] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.705 [2024-08-13 06:14:40.427930] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.705 00:19:38.705 Latency(us) 00:19:38.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.705 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:38.705 raid_bdev1 : 10.35 112.83 338.49 0.00 0.00 12594.54 279.03 109436.53 00:19:38.705 =================================================================================================================== 00:19:38.705 Total : 112.83 338.49 0.00 0.00 12594.54 279.03 109436.53 00:19:38.705 0 00:19:38.705 [2024-08-13 06:14:40.478606] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.705 [2024-08-13 06:14:40.478647] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.705 [2024-08-13 06:14:40.478731] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.705 [2024-08-13 06:14:40.478741] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:19:38.964 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.964 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:19:38.964 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:19:38.964 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:19:38.964 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:19:38.964 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:38.964 06:14:40 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:38.965 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:39.224 /dev/nbd0 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.224 1+0 records in 00:19:39.224 1+0 records out 00:19:39.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592312 s, 6.9 MB/s 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 
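[Editor's sketch] The nbd_common.sh trace just above is the readiness check for a freshly exported NBD device: wait for it to show up in /proc/partitions, then confirm a single 4 KiB O_DIRECT read succeeds. A condensed sketch of that check; the retry delay and the temp-file path (/tmp/nbdtest) are assumptions, not taken from the helper's source:

    waitfornbd_sketch() {
        local nbd=$1 tmp=/tmp/nbdtest i
        # Wait (up to 20 tries) for the device node to be registered.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        # One 4 KiB direct read proves the export is actually serving I/O.
        dd if="/dev/$nbd" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s "$tmp") -eq 4096 ]] || return 1
        rm -f "$tmp"
    }
    waitfornbd_sketch nbd0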
00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:39.224 06:14:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:39.484 /dev/nbd1 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.484 1+0 records in 00:19:39.484 1+0 records out 00:19:39.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478586 s, 8.6 MB/s 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:19:39.484 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.485 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:39.485 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:19:39.485 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.485 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:39.485 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 
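Editor's note: from here the test loops over the surviving base bdevs, exporting each one on /dev/nbd1 and byte-comparing it against the rebuilt spare on /dev/nbd0; the removed slot is skipped via the empty array entry and "continue". A condensed sketch of that loop, assuming the spare is already exported on /dev/nbd0; the array contents reflect this particular run (slot 1 cleared after removal) and are illustrative only.

    # Sketch: compare the rebuilt spare against each remaining base bdev.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    base_bdevs=(BaseBdev1 '' BaseBdev3 BaseBdev4)   # slot 1 was emptied when its bdev was removed

    for bdev in "${base_bdevs[@]:1}"; do
        [ -z "$bdev" ] && continue                  # skip the removed slot
        $RPC -s "$SOCK" nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 0 /dev/nbd0 /dev/nbd1                # raid1 mirrors must match byte for byte
        $RPC -s "$SOCK" nbd_stop_disk /dev/nbd1
    done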
00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:39.744 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:40.004 /dev/nbd1 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:40.004 
06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.004 1+0 records in 00:19:40.004 1+0 records out 00:19:40.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039137 s, 10.5 MB/s 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.004 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:40.264 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:40.264 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.264 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:40.264 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.264 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:40.264 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.264 06:14:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:40.264 06:14:42 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.264 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 95950 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 95950 ']' 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 95950 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95950 00:19:40.524 killing process with pid 95950 00:19:40.524 Received shutdown signal, test time was about 12.177399 seconds 00:19:40.524 00:19:40.524 Latency(us) 00:19:40.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.524 =================================================================================================================== 00:19:40.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95950' 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 95950 00:19:40.524 [2024-08-13 06:14:42.297734] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.524 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 95950 00:19:40.785 [2024-08-13 06:14:42.341730] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.785 06:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:19:40.785 00:19:40.785 
real 0m17.047s 00:19:40.785 user 0m26.339s 00:19:40.785 sys 0m2.708s 00:19:40.785 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:40.785 06:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.785 ************************************ 00:19:40.785 END TEST raid_rebuild_test_io 00:19:40.785 ************************************ 00:19:41.045 06:14:42 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:19:41.045 06:14:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:19:41.045 06:14:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:41.045 06:14:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:41.045 ************************************ 00:19:41.045 START TEST raid_rebuild_test_sb_io 00:19:41.045 ************************************ 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true true true 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 
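Editor's note: the next test case is the same rebuild routine re-run with a superblock and background I/O enabled (raid_rebuild_test raid1 4 true true true). The following is a paraphrase, reconstructed from the xtrace above, of how the positional arguments map onto the locals and how the BaseBdevN list is generated; variable names follow bdev_raid.sh, but this is a sketch rather than the script's exact code.

    # Sketch of the parameter handling visible in the trace
    # (raid_rebuild_test raid1 4 true true true).
    raid_rebuild_test_sketch() {
        local raid_level=$1        # raid1
        local num_base_bdevs=$2    # 4
        local superblock=$3        # true -> adds -s to bdev_raid_create
        local background_io=$4     # true -> run bdevperf I/O during rebuild
        local verify=$5            # true -> cmp the devices afterwards

        local base_bdevs=() i
        for ((i = 1; i <= num_base_bdevs; i++)); do
            base_bdevs+=("BaseBdev$i")             # BaseBdev1 .. BaseBdev4
        done

        local create_arg=''
        [ "$superblock" = true ] && create_arg+=' -s'
        echo "level=$raid_level bdevs=${base_bdevs[*]} create_arg=$create_arg"
    }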
00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=96402 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 96402 /var/tmp/spdk-raid.sock 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 96402 ']' 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:41.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:41.045 06:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.045 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:41.045 Zero copy mechanism will not be used. 00:19:41.045 [2024-08-13 06:14:42.757727] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
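Editor's note: before any RPCs can be issued, the test launches the bdevperf example application with a dedicated RPC socket and waits for it to start listening. A stripped-down sketch of that launch, reusing the exact flags from the trace; the polling loop below is an illustrative replacement for the waitforlisten helper, not the helper itself.

    # Sketch: start bdevperf with the options used by this test and wait
    # for its RPC socket to answer.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Illustrative stand-in for waitforlisten: poll until the socket responds.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "bdevperf (pid $raid_pid) is listening on $SOCK"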
00:19:41.045 [2024-08-13 06:14:42.757888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96402 ] 00:19:41.305 [2024-08-13 06:14:42.902738] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.305 [2024-08-13 06:14:42.949989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.305 [2024-08-13 06:14:42.992404] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.305 [2024-08-13 06:14:42.992448] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.875 06:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:41.875 06:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:19:41.875 06:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:41.875 06:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:42.133 BaseBdev1_malloc 00:19:42.133 06:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:42.393 [2024-08-13 06:14:43.963826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:42.393 [2024-08-13 06:14:43.963886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.393 [2024-08-13 06:14:43.963912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:42.393 [2024-08-13 06:14:43.963923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.393 [2024-08-13 06:14:43.966181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.393 [2024-08-13 06:14:43.966219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:42.393 BaseBdev1 00:19:42.393 06:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:42.393 06:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:42.393 BaseBdev2_malloc 00:19:42.652 06:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:42.652 [2024-08-13 06:14:44.371520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:42.652 [2024-08-13 06:14:44.371571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.652 [2024-08-13 06:14:44.371590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:42.652 [2024-08-13 06:14:44.371600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.652 [2024-08-13 06:14:44.373518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.652 [2024-08-13 06:14:44.373567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:19:42.652 BaseBdev2 00:19:42.652 06:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:42.653 06:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:42.912 BaseBdev3_malloc 00:19:42.912 06:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:43.171 [2024-08-13 06:14:44.802685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:43.171 [2024-08-13 06:14:44.802733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.171 [2024-08-13 06:14:44.802753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:43.171 [2024-08-13 06:14:44.802762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.171 [2024-08-13 06:14:44.804682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.171 [2024-08-13 06:14:44.804715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:43.171 BaseBdev3 00:19:43.171 06:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:19:43.171 06:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:43.431 BaseBdev4_malloc 00:19:43.431 06:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:43.431 [2024-08-13 06:14:45.218432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:43.431 [2024-08-13 06:14:45.218482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.431 [2024-08-13 06:14:45.218499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:43.431 [2024-08-13 06:14:45.218512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.431 [2024-08-13 06:14:45.220521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.431 [2024-08-13 06:14:45.220558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:43.691 BaseBdev4 00:19:43.691 06:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:43.691 spare_malloc 00:19:43.691 06:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:43.950 spare_delay 00:19:43.950 06:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:44.210 [2024-08-13 06:14:45.797873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:44.210 [2024-08-13 06:14:45.797920] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.210 [2024-08-13 06:14:45.797935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:44.210 [2024-08-13 06:14:45.797946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.210 [2024-08-13 06:14:45.799768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.210 [2024-08-13 06:14:45.799806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:44.210 spare 00:19:44.210 06:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:44.470 [2024-08-13 06:14:46.005604] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.470 [2024-08-13 06:14:46.007361] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.470 [2024-08-13 06:14:46.007431] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:44.470 [2024-08-13 06:14:46.007472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:44.470 [2024-08-13 06:14:46.007635] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:19:44.470 [2024-08-13 06:14:46.007655] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:44.470 [2024-08-13 06:14:46.007894] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:19:44.470 [2024-08-13 06:14:46.008024] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:19:44.470 [2024-08-13 06:14:46.008051] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:19:44.470 [2024-08-13 06:14:46.008166] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
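Editor's note: the trace above builds each base bdev as a malloc bdev wrapped in a passthru bdev, prepares a delayed "spare" target, and then assembles the raid1 bdev with an on-disk superblock. Condensed into plain RPC calls (sizes, names, and flags copied from the trace), the construction looks roughly like this sketch:

    # Sketch: recreate the bdev stack assembled in the trace.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"          # 32 MiB, 512 B blocks
        $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev$i"
    done

    # Spare device: malloc -> delay -> passthru, so rebuild I/O onto it can be slowed down.
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $RPC bdev_passthru_create -b spare_delay -p spare

    # Assemble the raid1 bdev with an on-disk superblock (-s).
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

    # Inspect the result the same way verify_raid_bdev_state does.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'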
00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.470 "name": "raid_bdev1", 00:19:44.470 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:44.470 "strip_size_kb": 0, 00:19:44.470 "state": "online", 00:19:44.470 "raid_level": "raid1", 00:19:44.470 "superblock": true, 00:19:44.470 "num_base_bdevs": 4, 00:19:44.470 "num_base_bdevs_discovered": 4, 00:19:44.470 "num_base_bdevs_operational": 4, 00:19:44.470 "base_bdevs_list": [ 00:19:44.470 { 00:19:44.470 "name": "BaseBdev1", 00:19:44.470 "uuid": "03cd9e03-2c86-5f23-87ed-64552dce85bc", 00:19:44.470 "is_configured": true, 00:19:44.470 "data_offset": 2048, 00:19:44.470 "data_size": 63488 00:19:44.470 }, 00:19:44.470 { 00:19:44.470 "name": "BaseBdev2", 00:19:44.470 "uuid": "14b2fe5c-59c1-5198-9046-09c14591ef86", 00:19:44.470 "is_configured": true, 00:19:44.470 "data_offset": 2048, 00:19:44.470 "data_size": 63488 00:19:44.470 }, 00:19:44.470 { 00:19:44.470 "name": "BaseBdev3", 00:19:44.470 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:44.470 "is_configured": true, 00:19:44.470 "data_offset": 2048, 00:19:44.470 "data_size": 63488 00:19:44.470 }, 00:19:44.470 { 00:19:44.470 "name": "BaseBdev4", 00:19:44.470 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:44.470 "is_configured": true, 00:19:44.470 "data_offset": 2048, 00:19:44.470 "data_size": 63488 00:19:44.470 } 00:19:44.470 ] 00:19:44.470 }' 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.470 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.039 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:45.039 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:19:45.299 [2024-08-13 06:14:46.912334] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.299 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:19:45.299 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:45.299 06:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.559 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:19:45.559 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:19:45.559 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:45.559 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:45.559 [2024-08-13 06:14:47.229432] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:19:45.559 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:45.559 Zero copy mechanism will not be used. 00:19:45.559 Running I/O for 60 seconds... 
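Editor's note: with the array online, the test reads back the raid size and the superblock data offset, starts the background workload through bdevperf's perform_tests helper, and removes BaseBdev1 to force a rebuild onto the spare while I/O is in flight. A compact sketch of that sequence with every command taken from the trace; the '&' backgrounding and exact ordering are assumptions about how the helper is driven here.

    # Sketch: query geometry, start background I/O, then remove a base bdev.
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    raid_bdev_size=$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')                    # 63488 in this run
    data_offset=$($RPC bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')        # 2048 with a superblock

    # Kick off the randrw workload defined on the bdevperf command line.
    # (Backgrounding is an assumption; the test script drives this via helpers.)
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/spdk-raid.sock perform_tests &

    # Removing a base bdev while I/O runs triggers the rebuild onto the spare.
    $RPC bdev_raid_remove_base_bdev BaseBdev1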
00:19:45.559 [2024-08-13 06:14:47.319529] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.559 [2024-08-13 06:14:47.319787] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.818 "name": "raid_bdev1", 00:19:45.818 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:45.818 "strip_size_kb": 0, 00:19:45.818 "state": "online", 00:19:45.818 "raid_level": "raid1", 00:19:45.818 "superblock": true, 00:19:45.818 "num_base_bdevs": 4, 00:19:45.818 "num_base_bdevs_discovered": 3, 00:19:45.818 "num_base_bdevs_operational": 3, 00:19:45.818 "base_bdevs_list": [ 00:19:45.818 { 00:19:45.818 "name": null, 00:19:45.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.818 "is_configured": false, 00:19:45.818 "data_offset": 2048, 00:19:45.818 "data_size": 63488 00:19:45.818 }, 00:19:45.818 { 00:19:45.818 "name": "BaseBdev2", 00:19:45.818 "uuid": "14b2fe5c-59c1-5198-9046-09c14591ef86", 00:19:45.818 "is_configured": true, 00:19:45.818 "data_offset": 2048, 00:19:45.818 "data_size": 63488 00:19:45.818 }, 00:19:45.818 { 00:19:45.818 "name": "BaseBdev3", 00:19:45.818 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:45.818 "is_configured": true, 00:19:45.818 "data_offset": 2048, 00:19:45.818 "data_size": 63488 00:19:45.818 }, 00:19:45.818 { 00:19:45.818 "name": "BaseBdev4", 00:19:45.818 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:45.818 "is_configured": true, 00:19:45.818 "data_offset": 2048, 00:19:45.818 "data_size": 63488 00:19:45.818 } 00:19:45.818 ] 00:19:45.818 }' 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.818 06:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.387 06:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:46.647 [2024-08-13 
06:14:48.302994] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.647 06:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:46.647 [2024-08-13 06:14:48.359542] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:19:46.647 [2024-08-13 06:14:48.361439] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:46.907 [2024-08-13 06:14:48.479356] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:46.907 [2024-08-13 06:14:48.480666] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:46.907 [2024-08-13 06:14:48.689274] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:46.907 [2024-08-13 06:14:48.689947] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:47.477 [2024-08-13 06:14:49.013840] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:47.477 [2024-08-13 06:14:49.014455] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:47.477 [2024-08-13 06:14:49.218452] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:47.737 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.737 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:47.737 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:47.737 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:47.737 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:47.737 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.737 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.997 [2024-08-13 06:14:49.568024] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:47.997 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.997 "name": "raid_bdev1", 00:19:47.997 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:47.997 "strip_size_kb": 0, 00:19:47.997 "state": "online", 00:19:47.997 "raid_level": "raid1", 00:19:47.997 "superblock": true, 00:19:47.997 "num_base_bdevs": 4, 00:19:47.997 "num_base_bdevs_discovered": 4, 00:19:47.997 "num_base_bdevs_operational": 4, 00:19:47.997 "process": { 00:19:47.997 "type": "rebuild", 00:19:47.997 "target": "spare", 00:19:47.997 "progress": { 00:19:47.997 "blocks": 12288, 00:19:47.997 "percent": 19 00:19:47.997 } 00:19:47.997 }, 00:19:47.997 "base_bdevs_list": [ 00:19:47.997 { 00:19:47.997 "name": "spare", 00:19:47.997 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:47.997 "is_configured": true, 00:19:47.997 "data_offset": 2048, 00:19:47.997 "data_size": 63488 00:19:47.997 }, 
00:19:47.997 { 00:19:47.997 "name": "BaseBdev2", 00:19:47.997 "uuid": "14b2fe5c-59c1-5198-9046-09c14591ef86", 00:19:47.997 "is_configured": true, 00:19:47.997 "data_offset": 2048, 00:19:47.997 "data_size": 63488 00:19:47.997 }, 00:19:47.997 { 00:19:47.997 "name": "BaseBdev3", 00:19:47.997 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:47.997 "is_configured": true, 00:19:47.997 "data_offset": 2048, 00:19:47.997 "data_size": 63488 00:19:47.997 }, 00:19:47.997 { 00:19:47.997 "name": "BaseBdev4", 00:19:47.997 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:47.997 "is_configured": true, 00:19:47.997 "data_offset": 2048, 00:19:47.997 "data_size": 63488 00:19:47.997 } 00:19:47.997 ] 00:19:47.997 }' 00:19:47.997 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:47.997 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.997 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:47.997 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.997 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:47.997 [2024-08-13 06:14:49.687592] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:47.997 [2024-08-13 06:14:49.688270] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:48.257 [2024-08-13 06:14:49.857681] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.257 [2024-08-13 06:14:49.943369] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:48.257 [2024-08-13 06:14:49.953577] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.257 [2024-08-13 06:14:49.953717] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.257 [2024-08-13 06:14:49.953736] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:48.257 [2024-08-13 06:14:49.970389] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.257 06:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.257 06:14:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.257 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.257 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.518 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.518 "name": "raid_bdev1", 00:19:48.518 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:48.518 "strip_size_kb": 0, 00:19:48.518 "state": "online", 00:19:48.518 "raid_level": "raid1", 00:19:48.518 "superblock": true, 00:19:48.518 "num_base_bdevs": 4, 00:19:48.518 "num_base_bdevs_discovered": 3, 00:19:48.518 "num_base_bdevs_operational": 3, 00:19:48.518 "base_bdevs_list": [ 00:19:48.518 { 00:19:48.518 "name": null, 00:19:48.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.518 "is_configured": false, 00:19:48.518 "data_offset": 2048, 00:19:48.518 "data_size": 63488 00:19:48.518 }, 00:19:48.518 { 00:19:48.518 "name": "BaseBdev2", 00:19:48.518 "uuid": "14b2fe5c-59c1-5198-9046-09c14591ef86", 00:19:48.518 "is_configured": true, 00:19:48.518 "data_offset": 2048, 00:19:48.518 "data_size": 63488 00:19:48.518 }, 00:19:48.518 { 00:19:48.518 "name": "BaseBdev3", 00:19:48.518 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:48.518 "is_configured": true, 00:19:48.518 "data_offset": 2048, 00:19:48.518 "data_size": 63488 00:19:48.518 }, 00:19:48.518 { 00:19:48.518 "name": "BaseBdev4", 00:19:48.518 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:48.518 "is_configured": true, 00:19:48.518 "data_offset": 2048, 00:19:48.518 "data_size": 63488 00:19:48.518 } 00:19:48.518 ] 00:19:48.518 }' 00:19:48.518 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.518 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.088 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.088 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:49.088 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:49.088 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:49.088 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:49.088 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.088 06:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.349 06:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:49.349 "name": "raid_bdev1", 00:19:49.349 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:49.349 "strip_size_kb": 0, 00:19:49.349 "state": "online", 00:19:49.349 "raid_level": "raid1", 00:19:49.349 "superblock": true, 00:19:49.349 "num_base_bdevs": 4, 00:19:49.349 "num_base_bdevs_discovered": 3, 00:19:49.349 "num_base_bdevs_operational": 3, 00:19:49.349 "base_bdevs_list": [ 00:19:49.349 { 00:19:49.349 "name": null, 00:19:49.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.349 "is_configured": false, 00:19:49.349 
"data_offset": 2048, 00:19:49.349 "data_size": 63488 00:19:49.349 }, 00:19:49.349 { 00:19:49.349 "name": "BaseBdev2", 00:19:49.349 "uuid": "14b2fe5c-59c1-5198-9046-09c14591ef86", 00:19:49.349 "is_configured": true, 00:19:49.349 "data_offset": 2048, 00:19:49.349 "data_size": 63488 00:19:49.349 }, 00:19:49.349 { 00:19:49.349 "name": "BaseBdev3", 00:19:49.349 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:49.349 "is_configured": true, 00:19:49.349 "data_offset": 2048, 00:19:49.349 "data_size": 63488 00:19:49.349 }, 00:19:49.349 { 00:19:49.349 "name": "BaseBdev4", 00:19:49.349 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:49.349 "is_configured": true, 00:19:49.349 "data_offset": 2048, 00:19:49.349 "data_size": 63488 00:19:49.349 } 00:19:49.349 ] 00:19:49.349 }' 00:19:49.349 06:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:49.349 06:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:49.349 06:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:49.349 06:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:49.349 06:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:49.609 [2024-08-13 06:14:51.267573] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.609 [2024-08-13 06:14:51.305929] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:19:49.609 [2024-08-13 06:14:51.307776] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.609 06:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:19:49.868 [2024-08-13 06:14:51.420698] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:49.869 [2024-08-13 06:14:51.421963] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:49.869 [2024-08-13 06:14:51.637998] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:49.869 [2024-08-13 06:14:51.638394] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:50.129 [2024-08-13 06:14:51.871615] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:50.129 [2024-08-13 06:14:51.872047] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:50.389 [2024-08-13 06:14:51.981132] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:50.648 [2024-08-13 06:14:52.212870] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:50.648 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.648 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:50.648 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:19:50.648 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:50.648 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:50.648 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.648 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.648 [2024-08-13 06:14:52.436665] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:50.908 "name": "raid_bdev1", 00:19:50.908 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:50.908 "strip_size_kb": 0, 00:19:50.908 "state": "online", 00:19:50.908 "raid_level": "raid1", 00:19:50.908 "superblock": true, 00:19:50.908 "num_base_bdevs": 4, 00:19:50.908 "num_base_bdevs_discovered": 4, 00:19:50.908 "num_base_bdevs_operational": 4, 00:19:50.908 "process": { 00:19:50.908 "type": "rebuild", 00:19:50.908 "target": "spare", 00:19:50.908 "progress": { 00:19:50.908 "blocks": 16384, 00:19:50.908 "percent": 25 00:19:50.908 } 00:19:50.908 }, 00:19:50.908 "base_bdevs_list": [ 00:19:50.908 { 00:19:50.908 "name": "spare", 00:19:50.908 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:50.908 "is_configured": true, 00:19:50.908 "data_offset": 2048, 00:19:50.908 "data_size": 63488 00:19:50.908 }, 00:19:50.908 { 00:19:50.908 "name": "BaseBdev2", 00:19:50.908 "uuid": "14b2fe5c-59c1-5198-9046-09c14591ef86", 00:19:50.908 "is_configured": true, 00:19:50.908 "data_offset": 2048, 00:19:50.908 "data_size": 63488 00:19:50.908 }, 00:19:50.908 { 00:19:50.908 "name": "BaseBdev3", 00:19:50.908 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:50.908 "is_configured": true, 00:19:50.908 "data_offset": 2048, 00:19:50.908 "data_size": 63488 00:19:50.908 }, 00:19:50.908 { 00:19:50.908 "name": "BaseBdev4", 00:19:50.908 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:50.908 "is_configured": true, 00:19:50.908 "data_offset": 2048, 00:19:50.908 "data_size": 63488 00:19:50.908 } 00:19:50.908 ] 00:19:50.908 }' 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:19:50.908 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:19:50.908 06:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:51.168 [2024-08-13 06:14:52.820545] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:51.168 [2024-08-13 06:14:52.897187] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:51.168 [2024-08-13 06:14:52.897360] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:51.429 [2024-08-13 06:14:53.104065] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:19:51.429 [2024-08-13 06:14:53.104093] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.429 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.689 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.689 "name": "raid_bdev1", 00:19:51.689 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:51.689 "strip_size_kb": 0, 00:19:51.689 "state": "online", 00:19:51.689 "raid_level": "raid1", 00:19:51.689 "superblock": true, 00:19:51.689 "num_base_bdevs": 4, 00:19:51.689 "num_base_bdevs_discovered": 3, 00:19:51.689 "num_base_bdevs_operational": 3, 00:19:51.689 "process": { 00:19:51.689 "type": "rebuild", 00:19:51.689 "target": "spare", 00:19:51.689 "progress": { 00:19:51.689 "blocks": 24576, 00:19:51.689 "percent": 38 00:19:51.689 } 00:19:51.689 }, 00:19:51.689 "base_bdevs_list": [ 00:19:51.689 { 00:19:51.689 "name": "spare", 00:19:51.689 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:51.689 "is_configured": true, 00:19:51.689 "data_offset": 2048, 00:19:51.689 "data_size": 63488 00:19:51.689 }, 00:19:51.689 { 00:19:51.689 "name": null, 00:19:51.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.690 "is_configured": false, 00:19:51.690 "data_offset": 2048, 00:19:51.690 "data_size": 63488 00:19:51.690 }, 00:19:51.690 { 00:19:51.690 "name": "BaseBdev3", 00:19:51.690 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:51.690 "is_configured": true, 00:19:51.690 "data_offset": 2048, 00:19:51.690 "data_size": 63488 00:19:51.690 }, 00:19:51.690 { 00:19:51.690 "name": "BaseBdev4", 00:19:51.690 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:51.690 "is_configured": true, 00:19:51.690 "data_offset": 2048, 00:19:51.690 "data_size": 63488 00:19:51.690 } 00:19:51.690 ] 00:19:51.690 
}' 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=845 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.690 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.690 [2024-08-13 06:14:53.438636] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:51.964 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.964 "name": "raid_bdev1", 00:19:51.964 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:51.964 "strip_size_kb": 0, 00:19:51.964 "state": "online", 00:19:51.964 "raid_level": "raid1", 00:19:51.964 "superblock": true, 00:19:51.964 "num_base_bdevs": 4, 00:19:51.964 "num_base_bdevs_discovered": 3, 00:19:51.964 "num_base_bdevs_operational": 3, 00:19:51.964 "process": { 00:19:51.964 "type": "rebuild", 00:19:51.964 "target": "spare", 00:19:51.964 "progress": { 00:19:51.964 "blocks": 28672, 00:19:51.964 "percent": 45 00:19:51.964 } 00:19:51.964 }, 00:19:51.964 "base_bdevs_list": [ 00:19:51.964 { 00:19:51.964 "name": "spare", 00:19:51.964 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:51.964 "is_configured": true, 00:19:51.964 "data_offset": 2048, 00:19:51.964 "data_size": 63488 00:19:51.964 }, 00:19:51.964 { 00:19:51.964 "name": null, 00:19:51.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.964 "is_configured": false, 00:19:51.964 "data_offset": 2048, 00:19:51.964 "data_size": 63488 00:19:51.964 }, 00:19:51.965 { 00:19:51.965 "name": "BaseBdev3", 00:19:51.965 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:51.965 "is_configured": true, 00:19:51.965 "data_offset": 2048, 00:19:51.965 "data_size": 63488 00:19:51.965 }, 00:19:51.965 { 00:19:51.965 "name": "BaseBdev4", 00:19:51.965 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:51.965 "is_configured": true, 00:19:51.965 "data_offset": 2048, 00:19:51.965 "data_size": 63488 00:19:51.965 } 00:19:51.965 ] 00:19:51.965 }' 00:19:51.965 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:51.965 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.965 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:51.965 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.965 06:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:52.578 [2024-08-13 06:14:54.095067] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:52.837 [2024-08-13 06:14:54.518475] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:52.837 [2024-08-13 06:14:54.518988] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.097 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.357 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.357 "name": "raid_bdev1", 00:19:53.357 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:53.357 "strip_size_kb": 0, 00:19:53.357 "state": "online", 00:19:53.357 "raid_level": "raid1", 00:19:53.357 "superblock": true, 00:19:53.357 "num_base_bdevs": 4, 00:19:53.357 "num_base_bdevs_discovered": 3, 00:19:53.357 "num_base_bdevs_operational": 3, 00:19:53.357 "process": { 00:19:53.357 "type": "rebuild", 00:19:53.358 "target": "spare", 00:19:53.358 "progress": { 00:19:53.358 "blocks": 51200, 00:19:53.358 "percent": 80 00:19:53.358 } 00:19:53.358 }, 00:19:53.358 "base_bdevs_list": [ 00:19:53.358 { 00:19:53.358 "name": "spare", 00:19:53.358 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:53.358 "is_configured": true, 00:19:53.358 "data_offset": 2048, 00:19:53.358 "data_size": 63488 00:19:53.358 }, 00:19:53.358 { 00:19:53.358 "name": null, 00:19:53.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.358 "is_configured": false, 00:19:53.358 "data_offset": 2048, 00:19:53.358 "data_size": 63488 00:19:53.358 }, 00:19:53.358 { 00:19:53.358 "name": "BaseBdev3", 00:19:53.358 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:53.358 "is_configured": true, 00:19:53.358 "data_offset": 2048, 00:19:53.358 "data_size": 63488 00:19:53.358 }, 00:19:53.358 { 00:19:53.358 "name": "BaseBdev4", 00:19:53.358 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:53.358 "is_configured": true, 00:19:53.358 "data_offset": 2048, 00:19:53.358 "data_size": 63488 00:19:53.358 } 00:19:53.358 ] 00:19:53.358 }' 00:19:53.358 06:14:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:53.358 [2024-08-13 06:14:54.949457] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:53.358 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.358 06:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:53.358 06:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.358 06:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:19:53.618 [2024-08-13 06:14:55.274134] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:53.877 [2024-08-13 06:14:55.592629] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:54.137 [2024-08-13 06:14:55.697461] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:54.137 [2024-08-13 06:14:55.700990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.396 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.656 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:54.656 "name": "raid_bdev1", 00:19:54.656 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:54.656 "strip_size_kb": 0, 00:19:54.656 "state": "online", 00:19:54.656 "raid_level": "raid1", 00:19:54.656 "superblock": true, 00:19:54.657 "num_base_bdevs": 4, 00:19:54.657 "num_base_bdevs_discovered": 3, 00:19:54.657 "num_base_bdevs_operational": 3, 00:19:54.657 "base_bdevs_list": [ 00:19:54.657 { 00:19:54.657 "name": "spare", 00:19:54.657 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:54.657 "is_configured": true, 00:19:54.657 "data_offset": 2048, 00:19:54.657 "data_size": 63488 00:19:54.657 }, 00:19:54.657 { 00:19:54.657 "name": null, 00:19:54.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.657 "is_configured": false, 00:19:54.657 "data_offset": 2048, 00:19:54.657 "data_size": 63488 00:19:54.657 }, 00:19:54.657 { 00:19:54.657 "name": "BaseBdev3", 00:19:54.657 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:54.657 "is_configured": true, 00:19:54.657 "data_offset": 2048, 00:19:54.657 "data_size": 63488 00:19:54.657 }, 00:19:54.657 { 00:19:54.657 "name": "BaseBdev4", 00:19:54.657 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:54.657 
"is_configured": true, 00:19:54.657 "data_offset": 2048, 00:19:54.657 "data_size": 63488 00:19:54.657 } 00:19:54.657 ] 00:19:54.657 }' 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.657 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:54.917 "name": "raid_bdev1", 00:19:54.917 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:54.917 "strip_size_kb": 0, 00:19:54.917 "state": "online", 00:19:54.917 "raid_level": "raid1", 00:19:54.917 "superblock": true, 00:19:54.917 "num_base_bdevs": 4, 00:19:54.917 "num_base_bdevs_discovered": 3, 00:19:54.917 "num_base_bdevs_operational": 3, 00:19:54.917 "base_bdevs_list": [ 00:19:54.917 { 00:19:54.917 "name": "spare", 00:19:54.917 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:54.917 "is_configured": true, 00:19:54.917 "data_offset": 2048, 00:19:54.917 "data_size": 63488 00:19:54.917 }, 00:19:54.917 { 00:19:54.917 "name": null, 00:19:54.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.917 "is_configured": false, 00:19:54.917 "data_offset": 2048, 00:19:54.917 "data_size": 63488 00:19:54.917 }, 00:19:54.917 { 00:19:54.917 "name": "BaseBdev3", 00:19:54.917 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:54.917 "is_configured": true, 00:19:54.917 "data_offset": 2048, 00:19:54.917 "data_size": 63488 00:19:54.917 }, 00:19:54.917 { 00:19:54.917 "name": "BaseBdev4", 00:19:54.917 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:54.917 "is_configured": true, 00:19:54.917 "data_offset": 2048, 00:19:54.917 "data_size": 63488 00:19:54.917 } 00:19:54.917 ] 00:19:54.917 }' 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.917 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.177 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.177 "name": "raid_bdev1", 00:19:55.177 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:55.177 "strip_size_kb": 0, 00:19:55.177 "state": "online", 00:19:55.177 "raid_level": "raid1", 00:19:55.177 "superblock": true, 00:19:55.177 "num_base_bdevs": 4, 00:19:55.177 "num_base_bdevs_discovered": 3, 00:19:55.177 "num_base_bdevs_operational": 3, 00:19:55.177 "base_bdevs_list": [ 00:19:55.177 { 00:19:55.177 "name": "spare", 00:19:55.177 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:55.177 "is_configured": true, 00:19:55.177 "data_offset": 2048, 00:19:55.177 "data_size": 63488 00:19:55.177 }, 00:19:55.177 { 00:19:55.177 "name": null, 00:19:55.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.177 "is_configured": false, 00:19:55.177 "data_offset": 2048, 00:19:55.177 "data_size": 63488 00:19:55.177 }, 00:19:55.177 { 00:19:55.177 "name": "BaseBdev3", 00:19:55.177 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:55.177 "is_configured": true, 00:19:55.177 "data_offset": 2048, 00:19:55.177 "data_size": 63488 00:19:55.177 }, 00:19:55.177 { 00:19:55.177 "name": "BaseBdev4", 00:19:55.177 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:55.177 "is_configured": true, 00:19:55.177 "data_offset": 2048, 00:19:55.177 "data_size": 63488 00:19:55.177 } 00:19:55.177 ] 00:19:55.177 }' 00:19:55.178 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.178 06:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.745 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:56.005 [2024-08-13 06:14:57.548742] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.005 [2024-08-13 06:14:57.548878] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.005 00:19:56.005 Latency(us) 00:19:56.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:56.005 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:56.005 raid_bdev1 : 10.41 108.42 325.26 0.00 0.00 12447.16 282.61 114473.36 00:19:56.005 =================================================================================================================== 00:19:56.005 Total : 108.42 325.26 0.00 0.00 12447.16 282.61 114473.36 00:19:56.005 [2024-08-13 06:14:57.627448] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.005 [2024-08-13 06:14:57.627520] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.005 [2024-08-13 06:14:57.627619] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.005 [2024-08-13 06:14:57.627675] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:19:56.005 0 00:19:56.005 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.005 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.265 06:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:56.525 /dev/nbd0 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@869 -- # break 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.525 1+0 records in 00:19:56.525 1+0 records out 00:19:56.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521838 s, 7.8 MB/s 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.525 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:56.785 /dev/nbd1 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 
00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.785 1+0 records in 00:19:56.785 1+0 records out 00:19:56.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506832 s, 8.1 MB/s 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.785 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.045 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:57.045 /dev/nbd1 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.305 1+0 records in 00:19:57.305 1+0 records out 00:19:57.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380262 s, 10.8 MB/s 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.305 06:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:57.565 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.566 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:57.566 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.566 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:57.566 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.566 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:57.566 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:57.825 06:14:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:57.825 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:58.085 [2024-08-13 06:14:59.754890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:58.085 [2024-08-13 06:14:59.755234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.085 [2024-08-13 06:14:59.755347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:58.085 [2024-08-13 06:14:59.755403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.085 [2024-08-13 06:14:59.757434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.085 [2024-08-13 06:14:59.757553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:58.085 [2024-08-13 06:14:59.757680] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:58.085 [2024-08-13 06:14:59.757727] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.085 [2024-08-13 06:14:59.757841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:58.085 [2024-08-13 06:14:59.757949] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:58.085 spare 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.085 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.085 [2024-08-13 06:14:59.857834] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:19:58.085 [2024-08-13 06:14:59.857875] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:58.085 [2024-08-13 06:14:59.858160] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:19:58.085 [2024-08-13 06:14:59.858312] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:19:58.085 [2024-08-13 06:14:59.858340] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:19:58.085 [2024-08-13 06:14:59.858444] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.345 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.345 "name": "raid_bdev1", 00:19:58.345 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:58.345 "strip_size_kb": 0, 00:19:58.345 "state": "online", 00:19:58.345 "raid_level": "raid1", 00:19:58.345 "superblock": true, 00:19:58.345 "num_base_bdevs": 4, 00:19:58.345 "num_base_bdevs_discovered": 3, 00:19:58.345 "num_base_bdevs_operational": 3, 00:19:58.345 "base_bdevs_list": [ 00:19:58.345 { 00:19:58.345 "name": "spare", 00:19:58.345 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:58.345 "is_configured": true, 00:19:58.345 "data_offset": 2048, 00:19:58.345 "data_size": 63488 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "name": null, 00:19:58.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.345 "is_configured": false, 00:19:58.345 "data_offset": 2048, 00:19:58.345 "data_size": 63488 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "name": "BaseBdev3", 00:19:58.345 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:58.345 "is_configured": true, 00:19:58.345 "data_offset": 2048, 00:19:58.345 "data_size": 63488 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "name": "BaseBdev4", 00:19:58.345 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:58.345 "is_configured": true, 00:19:58.345 "data_offset": 2048, 00:19:58.345 "data_size": 63488 00:19:58.345 } 00:19:58.345 ] 00:19:58.345 }' 00:19:58.345 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.345 06:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.914 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.914 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:58.914 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:58.914 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:58.914 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:58.914 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.914 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.173 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.173 "name": "raid_bdev1", 00:19:59.173 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:59.173 "strip_size_kb": 0, 00:19:59.173 "state": "online", 00:19:59.173 "raid_level": "raid1", 00:19:59.173 "superblock": true, 00:19:59.173 "num_base_bdevs": 4, 00:19:59.173 "num_base_bdevs_discovered": 3, 00:19:59.173 "num_base_bdevs_operational": 3, 00:19:59.173 "base_bdevs_list": [ 00:19:59.173 { 00:19:59.173 "name": "spare", 00:19:59.173 "uuid": 
"0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:19:59.173 "is_configured": true, 00:19:59.173 "data_offset": 2048, 00:19:59.174 "data_size": 63488 00:19:59.174 }, 00:19:59.174 { 00:19:59.174 "name": null, 00:19:59.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.174 "is_configured": false, 00:19:59.174 "data_offset": 2048, 00:19:59.174 "data_size": 63488 00:19:59.174 }, 00:19:59.174 { 00:19:59.174 "name": "BaseBdev3", 00:19:59.174 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:59.174 "is_configured": true, 00:19:59.174 "data_offset": 2048, 00:19:59.174 "data_size": 63488 00:19:59.174 }, 00:19:59.174 { 00:19:59.174 "name": "BaseBdev4", 00:19:59.174 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:59.174 "is_configured": true, 00:19:59.174 "data_offset": 2048, 00:19:59.174 "data_size": 63488 00:19:59.174 } 00:19:59.174 ] 00:19:59.174 }' 00:19:59.174 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:59.174 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:59.174 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:59.174 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:59.174 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.174 06:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:59.433 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.433 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:59.692 [2024-08-13 06:15:01.226146] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:19:59.692 "name": "raid_bdev1", 00:19:59.692 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:19:59.692 "strip_size_kb": 0, 00:19:59.692 "state": "online", 00:19:59.692 "raid_level": "raid1", 00:19:59.692 "superblock": true, 00:19:59.692 "num_base_bdevs": 4, 00:19:59.692 "num_base_bdevs_discovered": 2, 00:19:59.692 "num_base_bdevs_operational": 2, 00:19:59.692 "base_bdevs_list": [ 00:19:59.692 { 00:19:59.692 "name": null, 00:19:59.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.692 "is_configured": false, 00:19:59.692 "data_offset": 2048, 00:19:59.692 "data_size": 63488 00:19:59.692 }, 00:19:59.692 { 00:19:59.692 "name": null, 00:19:59.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.692 "is_configured": false, 00:19:59.692 "data_offset": 2048, 00:19:59.692 "data_size": 63488 00:19:59.692 }, 00:19:59.692 { 00:19:59.692 "name": "BaseBdev3", 00:19:59.692 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:19:59.692 "is_configured": true, 00:19:59.692 "data_offset": 2048, 00:19:59.692 "data_size": 63488 00:19:59.692 }, 00:19:59.692 { 00:19:59.692 "name": "BaseBdev4", 00:19:59.692 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:19:59.692 "is_configured": true, 00:19:59.692 "data_offset": 2048, 00:19:59.692 "data_size": 63488 00:19:59.692 } 00:19:59.692 ] 00:19:59.692 }' 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.692 06:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.260 06:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:00.518 [2024-08-13 06:15:02.222148] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.518 [2024-08-13 06:15:02.222369] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:00.518 [2024-08-13 06:15:02.222429] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:00.518 [2024-08-13 06:15:02.222895] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.518 [2024-08-13 06:15:02.226529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:20:00.518 [2024-08-13 06:15:02.228309] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.518 06:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:20:01.456 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.456 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:01.456 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:01.456 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:01.456 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:01.716 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.716 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.716 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.716 "name": "raid_bdev1", 00:20:01.716 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:01.716 "strip_size_kb": 0, 00:20:01.716 "state": "online", 00:20:01.716 "raid_level": "raid1", 00:20:01.716 "superblock": true, 00:20:01.716 "num_base_bdevs": 4, 00:20:01.716 "num_base_bdevs_discovered": 3, 00:20:01.716 "num_base_bdevs_operational": 3, 00:20:01.716 "process": { 00:20:01.716 "type": "rebuild", 00:20:01.716 "target": "spare", 00:20:01.716 "progress": { 00:20:01.716 "blocks": 24576, 00:20:01.716 "percent": 38 00:20:01.716 } 00:20:01.716 }, 00:20:01.716 "base_bdevs_list": [ 00:20:01.716 { 00:20:01.716 "name": "spare", 00:20:01.716 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:20:01.716 "is_configured": true, 00:20:01.716 "data_offset": 2048, 00:20:01.716 "data_size": 63488 00:20:01.716 }, 00:20:01.716 { 00:20:01.716 "name": null, 00:20:01.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.716 "is_configured": false, 00:20:01.716 "data_offset": 2048, 00:20:01.716 "data_size": 63488 00:20:01.716 }, 00:20:01.716 { 00:20:01.716 "name": "BaseBdev3", 00:20:01.716 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:01.716 "is_configured": true, 00:20:01.716 "data_offset": 2048, 00:20:01.716 "data_size": 63488 00:20:01.716 }, 00:20:01.716 { 00:20:01.716 "name": "BaseBdev4", 00:20:01.716 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:01.716 "is_configured": true, 00:20:01.716 "data_offset": 2048, 00:20:01.716 "data_size": 63488 00:20:01.716 } 00:20:01.716 ] 00:20:01.716 }' 00:20:01.716 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:01.716 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.716 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:01.976 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.976 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:01.976 [2024-08-13 06:15:03.718243] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.976 [2024-08-13 06:15:03.733428] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:01.976 [2024-08-13 06:15:03.733810] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.976 [2024-08-13 06:15:03.733839] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.976 [2024-08-13 06:15:03.733850] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.236 "name": "raid_bdev1", 00:20:02.236 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:02.236 "strip_size_kb": 0, 00:20:02.236 "state": "online", 00:20:02.236 "raid_level": "raid1", 00:20:02.236 "superblock": true, 00:20:02.236 "num_base_bdevs": 4, 00:20:02.236 "num_base_bdevs_discovered": 2, 00:20:02.236 "num_base_bdevs_operational": 2, 00:20:02.236 "base_bdevs_list": [ 00:20:02.236 { 00:20:02.236 "name": null, 00:20:02.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.236 "is_configured": false, 00:20:02.236 "data_offset": 2048, 00:20:02.236 "data_size": 63488 00:20:02.236 }, 00:20:02.236 { 00:20:02.236 "name": null, 00:20:02.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.236 "is_configured": false, 00:20:02.236 "data_offset": 2048, 00:20:02.236 "data_size": 63488 00:20:02.236 }, 00:20:02.236 { 00:20:02.236 "name": "BaseBdev3", 00:20:02.236 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:02.236 "is_configured": true, 00:20:02.236 "data_offset": 2048, 00:20:02.236 "data_size": 63488 00:20:02.236 }, 00:20:02.236 { 00:20:02.236 "name": "BaseBdev4", 00:20:02.236 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:02.236 "is_configured": true, 00:20:02.236 "data_offset": 2048, 00:20:02.236 "data_size": 63488 
00:20:02.236 } 00:20:02.236 ] 00:20:02.236 }' 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.236 06:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.804 06:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:03.063 [2024-08-13 06:15:04.680403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:03.063 [2024-08-13 06:15:04.680658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.063 [2024-08-13 06:15:04.680730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:03.063 [2024-08-13 06:15:04.680774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.063 [2024-08-13 06:15:04.681253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.063 [2024-08-13 06:15:04.681362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:03.063 [2024-08-13 06:15:04.681493] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:03.063 [2024-08-13 06:15:04.681514] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:03.064 [2024-08-13 06:15:04.681525] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:03.064 [2024-08-13 06:15:04.681608] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.064 [2024-08-13 06:15:04.685239] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:20:03.064 spare 00:20:03.064 [2024-08-13 06:15:04.686981] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.064 06:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:20:04.000 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.001 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:04.001 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:04.001 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:04.001 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:04.001 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.001 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.260 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.260 "name": "raid_bdev1", 00:20:04.260 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:04.260 "strip_size_kb": 0, 00:20:04.260 "state": "online", 00:20:04.260 "raid_level": "raid1", 00:20:04.260 "superblock": true, 00:20:04.260 "num_base_bdevs": 4, 00:20:04.260 "num_base_bdevs_discovered": 3, 00:20:04.260 "num_base_bdevs_operational": 3, 00:20:04.260 "process": { 00:20:04.260 "type": "rebuild", 00:20:04.260 "target": 
"spare", 00:20:04.260 "progress": { 00:20:04.260 "blocks": 22528, 00:20:04.260 "percent": 35 00:20:04.260 } 00:20:04.260 }, 00:20:04.260 "base_bdevs_list": [ 00:20:04.260 { 00:20:04.260 "name": "spare", 00:20:04.260 "uuid": "0c640aa2-ead3-5414-ab88-f4e2e2178cd1", 00:20:04.260 "is_configured": true, 00:20:04.260 "data_offset": 2048, 00:20:04.260 "data_size": 63488 00:20:04.260 }, 00:20:04.260 { 00:20:04.260 "name": null, 00:20:04.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.260 "is_configured": false, 00:20:04.260 "data_offset": 2048, 00:20:04.260 "data_size": 63488 00:20:04.260 }, 00:20:04.260 { 00:20:04.260 "name": "BaseBdev3", 00:20:04.260 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:04.260 "is_configured": true, 00:20:04.260 "data_offset": 2048, 00:20:04.260 "data_size": 63488 00:20:04.260 }, 00:20:04.260 { 00:20:04.260 "name": "BaseBdev4", 00:20:04.260 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:04.260 "is_configured": true, 00:20:04.260 "data_offset": 2048, 00:20:04.260 "data_size": 63488 00:20:04.260 } 00:20:04.260 ] 00:20:04.260 }' 00:20:04.260 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:04.260 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.260 06:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:04.260 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.260 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:04.519 [2024-08-13 06:15:06.166930] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.519 [2024-08-13 06:15:06.192115] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:04.519 [2024-08-13 06:15:06.192476] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.519 [2024-08-13 06:15:06.192508] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.519 [2024-08-13 06:15:06.192517] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.519 06:15:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.519 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.778 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.778 "name": "raid_bdev1", 00:20:04.778 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:04.778 "strip_size_kb": 0, 00:20:04.778 "state": "online", 00:20:04.778 "raid_level": "raid1", 00:20:04.778 "superblock": true, 00:20:04.778 "num_base_bdevs": 4, 00:20:04.778 "num_base_bdevs_discovered": 2, 00:20:04.778 "num_base_bdevs_operational": 2, 00:20:04.778 "base_bdevs_list": [ 00:20:04.778 { 00:20:04.778 "name": null, 00:20:04.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.778 "is_configured": false, 00:20:04.778 "data_offset": 2048, 00:20:04.778 "data_size": 63488 00:20:04.778 }, 00:20:04.778 { 00:20:04.778 "name": null, 00:20:04.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.778 "is_configured": false, 00:20:04.778 "data_offset": 2048, 00:20:04.778 "data_size": 63488 00:20:04.778 }, 00:20:04.778 { 00:20:04.778 "name": "BaseBdev3", 00:20:04.778 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:04.778 "is_configured": true, 00:20:04.778 "data_offset": 2048, 00:20:04.778 "data_size": 63488 00:20:04.778 }, 00:20:04.778 { 00:20:04.778 "name": "BaseBdev4", 00:20:04.778 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:04.778 "is_configured": true, 00:20:04.778 "data_offset": 2048, 00:20:04.778 "data_size": 63488 00:20:04.778 } 00:20:04.778 ] 00:20:04.778 }' 00:20:04.779 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.779 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.347 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:05.347 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:05.347 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:05.347 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:05.347 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:05.347 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.347 06:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.606 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:05.606 "name": "raid_bdev1", 00:20:05.606 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:05.606 "strip_size_kb": 0, 00:20:05.606 "state": "online", 00:20:05.606 "raid_level": "raid1", 00:20:05.606 "superblock": true, 00:20:05.606 "num_base_bdevs": 4, 00:20:05.606 "num_base_bdevs_discovered": 2, 00:20:05.606 "num_base_bdevs_operational": 2, 00:20:05.606 "base_bdevs_list": [ 00:20:05.606 { 00:20:05.606 "name": null, 00:20:05.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.606 "is_configured": false, 00:20:05.606 "data_offset": 2048, 00:20:05.606 "data_size": 63488 00:20:05.606 }, 00:20:05.606 { 00:20:05.606 "name": null, 
00:20:05.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.606 "is_configured": false, 00:20:05.606 "data_offset": 2048, 00:20:05.606 "data_size": 63488 00:20:05.606 }, 00:20:05.606 { 00:20:05.606 "name": "BaseBdev3", 00:20:05.606 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:05.606 "is_configured": true, 00:20:05.606 "data_offset": 2048, 00:20:05.606 "data_size": 63488 00:20:05.606 }, 00:20:05.606 { 00:20:05.606 "name": "BaseBdev4", 00:20:05.606 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:05.606 "is_configured": true, 00:20:05.606 "data_offset": 2048, 00:20:05.606 "data_size": 63488 00:20:05.606 } 00:20:05.606 ] 00:20:05.606 }' 00:20:05.606 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:05.606 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:05.606 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:05.606 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:05.606 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:05.865 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:05.865 [2024-08-13 06:15:07.610178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:05.865 [2024-08-13 06:15:07.610610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.865 [2024-08-13 06:15:07.610697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:05.865 [2024-08-13 06:15:07.610741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.865 [2024-08-13 06:15:07.611154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.865 [2024-08-13 06:15:07.611181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:05.865 [2024-08-13 06:15:07.611261] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:05.865 [2024-08-13 06:15:07.611283] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:05.865 [2024-08-13 06:15:07.611296] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:05.865 BaseBdev1 00:20:05.865 06:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:07.244 
06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.244 "name": "raid_bdev1", 00:20:07.244 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:07.244 "strip_size_kb": 0, 00:20:07.244 "state": "online", 00:20:07.244 "raid_level": "raid1", 00:20:07.244 "superblock": true, 00:20:07.244 "num_base_bdevs": 4, 00:20:07.244 "num_base_bdevs_discovered": 2, 00:20:07.244 "num_base_bdevs_operational": 2, 00:20:07.244 "base_bdevs_list": [ 00:20:07.244 { 00:20:07.244 "name": null, 00:20:07.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.244 "is_configured": false, 00:20:07.244 "data_offset": 2048, 00:20:07.244 "data_size": 63488 00:20:07.244 }, 00:20:07.244 { 00:20:07.244 "name": null, 00:20:07.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.244 "is_configured": false, 00:20:07.244 "data_offset": 2048, 00:20:07.244 "data_size": 63488 00:20:07.244 }, 00:20:07.244 { 00:20:07.244 "name": "BaseBdev3", 00:20:07.244 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:07.244 "is_configured": true, 00:20:07.244 "data_offset": 2048, 00:20:07.244 "data_size": 63488 00:20:07.244 }, 00:20:07.244 { 00:20:07.244 "name": "BaseBdev4", 00:20:07.244 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:07.244 "is_configured": true, 00:20:07.244 "data_offset": 2048, 00:20:07.244 "data_size": 63488 00:20:07.244 } 00:20:07.244 ] 00:20:07.244 }' 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.244 06:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.812 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.812 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:07.812 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:07.812 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:07.812 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:07.812 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.812 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:08.071 "name": "raid_bdev1", 00:20:08.071 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:08.071 "strip_size_kb": 0, 00:20:08.071 "state": "online", 00:20:08.071 "raid_level": "raid1", 00:20:08.071 
"superblock": true, 00:20:08.071 "num_base_bdevs": 4, 00:20:08.071 "num_base_bdevs_discovered": 2, 00:20:08.071 "num_base_bdevs_operational": 2, 00:20:08.071 "base_bdevs_list": [ 00:20:08.071 { 00:20:08.071 "name": null, 00:20:08.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.071 "is_configured": false, 00:20:08.071 "data_offset": 2048, 00:20:08.071 "data_size": 63488 00:20:08.071 }, 00:20:08.071 { 00:20:08.071 "name": null, 00:20:08.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.071 "is_configured": false, 00:20:08.071 "data_offset": 2048, 00:20:08.071 "data_size": 63488 00:20:08.071 }, 00:20:08.071 { 00:20:08.071 "name": "BaseBdev3", 00:20:08.071 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:08.071 "is_configured": true, 00:20:08.071 "data_offset": 2048, 00:20:08.071 "data_size": 63488 00:20:08.071 }, 00:20:08.071 { 00:20:08.071 "name": "BaseBdev4", 00:20:08.071 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:08.071 "is_configured": true, 00:20:08.071 "data_offset": 2048, 00:20:08.071 "data_size": 63488 00:20:08.071 } 00:20:08.071 ] 00:20:08.071 }' 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@646 -- # local es=0 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:08.071 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:08.329 [2024-08-13 06:15:09.942170] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.329 
[2024-08-13 06:15:09.942314] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:08.329 [2024-08-13 06:15:09.942326] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:08.329 request: 00:20:08.329 { 00:20:08.329 "base_bdev": "BaseBdev1", 00:20:08.329 "raid_bdev": "raid_bdev1", 00:20:08.329 "method": "bdev_raid_add_base_bdev", 00:20:08.329 "req_id": 1 00:20:08.329 } 00:20:08.329 Got JSON-RPC error response 00:20:08.329 response: 00:20:08.329 { 00:20:08.329 "code": -22, 00:20:08.329 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:08.329 } 00:20:08.329 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # es=1 00:20:08.329 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:20:08.329 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:20:08.329 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:20:08.329 06:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.265 06:15:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.524 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.524 "name": "raid_bdev1", 00:20:09.524 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:09.524 "strip_size_kb": 0, 00:20:09.524 "state": "online", 00:20:09.524 "raid_level": "raid1", 00:20:09.524 "superblock": true, 00:20:09.524 "num_base_bdevs": 4, 00:20:09.524 "num_base_bdevs_discovered": 2, 00:20:09.524 "num_base_bdevs_operational": 2, 00:20:09.524 "base_bdevs_list": [ 00:20:09.524 { 00:20:09.524 "name": null, 00:20:09.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.524 "is_configured": false, 00:20:09.524 "data_offset": 2048, 00:20:09.524 "data_size": 63488 00:20:09.524 }, 00:20:09.524 { 00:20:09.524 "name": null, 00:20:09.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.524 "is_configured": false, 00:20:09.524 
"data_offset": 2048, 00:20:09.524 "data_size": 63488 00:20:09.524 }, 00:20:09.524 { 00:20:09.524 "name": "BaseBdev3", 00:20:09.524 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:09.524 "is_configured": true, 00:20:09.524 "data_offset": 2048, 00:20:09.524 "data_size": 63488 00:20:09.524 }, 00:20:09.524 { 00:20:09.524 "name": "BaseBdev4", 00:20:09.524 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:09.524 "is_configured": true, 00:20:09.524 "data_offset": 2048, 00:20:09.524 "data_size": 63488 00:20:09.524 } 00:20:09.524 ] 00:20:09.524 }' 00:20:09.524 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.524 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.091 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.091 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:10.091 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:10.091 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:10.091 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:10.091 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.091 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:10.351 "name": "raid_bdev1", 00:20:10.351 "uuid": "1e0f52ef-aafa-4410-a7be-3e97e6e23f5b", 00:20:10.351 "strip_size_kb": 0, 00:20:10.351 "state": "online", 00:20:10.351 "raid_level": "raid1", 00:20:10.351 "superblock": true, 00:20:10.351 "num_base_bdevs": 4, 00:20:10.351 "num_base_bdevs_discovered": 2, 00:20:10.351 "num_base_bdevs_operational": 2, 00:20:10.351 "base_bdevs_list": [ 00:20:10.351 { 00:20:10.351 "name": null, 00:20:10.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.351 "is_configured": false, 00:20:10.351 "data_offset": 2048, 00:20:10.351 "data_size": 63488 00:20:10.351 }, 00:20:10.351 { 00:20:10.351 "name": null, 00:20:10.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.351 "is_configured": false, 00:20:10.351 "data_offset": 2048, 00:20:10.351 "data_size": 63488 00:20:10.351 }, 00:20:10.351 { 00:20:10.351 "name": "BaseBdev3", 00:20:10.351 "uuid": "79b19ef9-215b-5dcf-9ea7-a13303166cf2", 00:20:10.351 "is_configured": true, 00:20:10.351 "data_offset": 2048, 00:20:10.351 "data_size": 63488 00:20:10.351 }, 00:20:10.351 { 00:20:10.351 "name": "BaseBdev4", 00:20:10.351 "uuid": "77004d82-bd4e-54ff-ad4c-f8320fcfddac", 00:20:10.351 "is_configured": true, 00:20:10.351 "data_offset": 2048, 00:20:10.351 "data_size": 63488 00:20:10.351 } 00:20:10.351 ] 00:20:10.351 }' 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:10.351 06:15:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 96402 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 96402 ']' 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 96402 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:20:10.351 06:15:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:10.351 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96402 00:20:10.351 killing process with pid 96402 00:20:10.351 Received shutdown signal, test time was about 24.846114 seconds 00:20:10.351 00:20:10.351 Latency(us) 00:20:10.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.351 =================================================================================================================== 00:20:10.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.351 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:10.351 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:10.351 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96402' 00:20:10.351 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 96402 00:20:10.351 [2024-08-13 06:15:12.029747] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:10.351 [2024-08-13 06:15:12.029865] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.351 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 96402 00:20:10.351 [2024-08-13 06:15:12.029928] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.351 [2024-08-13 06:15:12.029938] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:20:10.351 [2024-08-13 06:15:12.074185] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:10.611 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:20:10.611 00:20:10.611 real 0m29.658s 00:20:10.611 user 0m46.453s 00:20:10.611 sys 0m4.159s 00:20:10.611 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:10.611 06:15:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.611 ************************************ 00:20:10.611 END TEST raid_rebuild_test_sb_io 00:20:10.611 ************************************ 00:20:10.611 06:15:12 bdev_raid -- bdev/bdev_raid.sh@964 -- # for n in {3..4} 00:20:10.611 06:15:12 bdev_raid -- bdev/bdev_raid.sh@965 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:20:10.611 06:15:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:10.611 06:15:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:10.611 06:15:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:10.611 ************************************ 00:20:10.611 START TEST raid5f_state_function_test 00:20:10.611 ************************************ 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # 
raid_state_function_test raid5f 3 false 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:10.611 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=97228 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:10.870 Process raid pid: 97228 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 97228' 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 97228 /var/tmp/spdk-raid.sock 00:20:10.870 06:15:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 97228 ']' 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:10.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:10.870 06:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.870 [2024-08-13 06:15:12.502986] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:20:10.870 [2024-08-13 06:15:12.503178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.870 [2024-08-13 06:15:12.652879] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.129 [2024-08-13 06:15:12.698101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.129 [2024-08-13 06:15:12.740438] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:11.129 [2024-08-13 06:15:12.740476] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:11.696 06:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:11.696 06:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:20:11.696 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:11.955 [2024-08-13 06:15:13.512158] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:11.955 [2024-08-13 06:15:13.512214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:11.955 [2024-08-13 06:15:13.512225] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.955 [2024-08-13 06:15:13.512232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.955 [2024-08-13 06:15:13.512241] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:11.955 [2024-08-13 06:15:13.512247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 
-- # local strip_size=64 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.955 "name": "Existed_Raid", 00:20:11.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.955 "strip_size_kb": 64, 00:20:11.955 "state": "configuring", 00:20:11.955 "raid_level": "raid5f", 00:20:11.955 "superblock": false, 00:20:11.955 "num_base_bdevs": 3, 00:20:11.955 "num_base_bdevs_discovered": 0, 00:20:11.955 "num_base_bdevs_operational": 3, 00:20:11.955 "base_bdevs_list": [ 00:20:11.955 { 00:20:11.955 "name": "BaseBdev1", 00:20:11.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.955 "is_configured": false, 00:20:11.955 "data_offset": 0, 00:20:11.955 "data_size": 0 00:20:11.955 }, 00:20:11.955 { 00:20:11.955 "name": "BaseBdev2", 00:20:11.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.955 "is_configured": false, 00:20:11.955 "data_offset": 0, 00:20:11.955 "data_size": 0 00:20:11.955 }, 00:20:11.955 { 00:20:11.955 "name": "BaseBdev3", 00:20:11.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.955 "is_configured": false, 00:20:11.955 "data_offset": 0, 00:20:11.955 "data_size": 0 00:20:11.955 } 00:20:11.955 ] 00:20:11.955 }' 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.955 06:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.523 06:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:12.782 [2024-08-13 06:15:14.426557] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:12.782 [2024-08-13 06:15:14.426592] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:20:12.782 06:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:13.040 [2024-08-13 06:15:14.646198] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:13.040 [2024-08-13 06:15:14.646235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:13.040 [2024-08-13 06:15:14.646245] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:13.040 [2024-08-13 06:15:14.646252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:20:13.040 [2024-08-13 06:15:14.646259] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:13.040 [2024-08-13 06:15:14.646266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:13.040 06:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:13.299 [2024-08-13 06:15:14.850793] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.299 BaseBdev1 00:20:13.299 06:15:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:13.299 06:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:13.299 06:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:13.299 06:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:13.299 06:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:13.299 06:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:13.299 06:15:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:13.558 [ 00:20:13.558 { 00:20:13.558 "name": "BaseBdev1", 00:20:13.558 "aliases": [ 00:20:13.558 "33b27a47-3398-4f0c-b2d3-e4433c795248" 00:20:13.558 ], 00:20:13.558 "product_name": "Malloc disk", 00:20:13.558 "block_size": 512, 00:20:13.558 "num_blocks": 65536, 00:20:13.558 "uuid": "33b27a47-3398-4f0c-b2d3-e4433c795248", 00:20:13.558 "assigned_rate_limits": { 00:20:13.558 "rw_ios_per_sec": 0, 00:20:13.558 "rw_mbytes_per_sec": 0, 00:20:13.558 "r_mbytes_per_sec": 0, 00:20:13.558 "w_mbytes_per_sec": 0 00:20:13.558 }, 00:20:13.558 "claimed": true, 00:20:13.558 "claim_type": "exclusive_write", 00:20:13.558 "zoned": false, 00:20:13.558 "supported_io_types": { 00:20:13.558 "read": true, 00:20:13.558 "write": true, 00:20:13.558 "unmap": true, 00:20:13.558 "flush": true, 00:20:13.558 "reset": true, 00:20:13.558 "nvme_admin": false, 00:20:13.558 "nvme_io": false, 00:20:13.558 "nvme_io_md": false, 00:20:13.558 "write_zeroes": true, 00:20:13.558 "zcopy": true, 00:20:13.558 "get_zone_info": false, 00:20:13.558 "zone_management": false, 00:20:13.558 "zone_append": false, 00:20:13.558 "compare": false, 00:20:13.558 "compare_and_write": false, 00:20:13.558 "abort": true, 00:20:13.558 "seek_hole": false, 00:20:13.558 "seek_data": false, 00:20:13.558 "copy": true, 00:20:13.558 "nvme_iov_md": false 00:20:13.558 }, 00:20:13.558 "memory_domains": [ 00:20:13.558 { 00:20:13.558 "dma_device_id": "system", 00:20:13.558 "dma_device_type": 1 00:20:13.558 }, 00:20:13.558 { 00:20:13.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.558 "dma_device_type": 2 00:20:13.558 } 00:20:13.558 ], 00:20:13.558 "driver_specific": {} 00:20:13.558 } 00:20:13.558 ] 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.558 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.821 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.821 "name": "Existed_Raid", 00:20:13.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.821 "strip_size_kb": 64, 00:20:13.821 "state": "configuring", 00:20:13.821 "raid_level": "raid5f", 00:20:13.821 "superblock": false, 00:20:13.821 "num_base_bdevs": 3, 00:20:13.821 "num_base_bdevs_discovered": 1, 00:20:13.821 "num_base_bdevs_operational": 3, 00:20:13.821 "base_bdevs_list": [ 00:20:13.821 { 00:20:13.821 "name": "BaseBdev1", 00:20:13.821 "uuid": "33b27a47-3398-4f0c-b2d3-e4433c795248", 00:20:13.821 "is_configured": true, 00:20:13.821 "data_offset": 0, 00:20:13.821 "data_size": 65536 00:20:13.821 }, 00:20:13.821 { 00:20:13.821 "name": "BaseBdev2", 00:20:13.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.821 "is_configured": false, 00:20:13.821 "data_offset": 0, 00:20:13.821 "data_size": 0 00:20:13.821 }, 00:20:13.821 { 00:20:13.821 "name": "BaseBdev3", 00:20:13.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.821 "is_configured": false, 00:20:13.821 "data_offset": 0, 00:20:13.821 "data_size": 0 00:20:13.821 } 00:20:13.821 ] 00:20:13.821 }' 00:20:13.821 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.821 06:15:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.422 06:15:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:14.422 [2024-08-13 06:15:16.120911] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:14.422 [2024-08-13 06:15:16.120958] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:20:14.422 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:14.701 
[2024-08-13 06:15:16.288670] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:14.701 [2024-08-13 06:15:16.290364] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:14.701 [2024-08-13 06:15:16.290405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:14.701 [2024-08-13 06:15:16.290416] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:14.701 [2024-08-13 06:15:16.290423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.701 "name": "Existed_Raid", 00:20:14.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.701 "strip_size_kb": 64, 00:20:14.701 "state": "configuring", 00:20:14.701 "raid_level": "raid5f", 00:20:14.701 "superblock": false, 00:20:14.701 "num_base_bdevs": 3, 00:20:14.701 "num_base_bdevs_discovered": 1, 00:20:14.701 "num_base_bdevs_operational": 3, 00:20:14.701 "base_bdevs_list": [ 00:20:14.701 { 00:20:14.701 "name": "BaseBdev1", 00:20:14.701 "uuid": "33b27a47-3398-4f0c-b2d3-e4433c795248", 00:20:14.701 "is_configured": true, 00:20:14.701 "data_offset": 0, 00:20:14.701 "data_size": 65536 00:20:14.701 }, 00:20:14.701 { 00:20:14.701 "name": "BaseBdev2", 00:20:14.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.701 "is_configured": false, 00:20:14.701 "data_offset": 0, 00:20:14.701 "data_size": 0 00:20:14.701 }, 00:20:14.701 { 00:20:14.701 "name": "BaseBdev3", 00:20:14.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.701 "is_configured": false, 00:20:14.701 "data_offset": 0, 00:20:14.701 "data_size": 0 00:20:14.701 } 00:20:14.701 ] 00:20:14.701 }' 
00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.701 06:15:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.269 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:15.528 [2024-08-13 06:15:17.207562] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.528 BaseBdev2 00:20:15.528 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:15.528 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:15.528 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:15.528 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:15.528 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:15.528 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:15.528 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:15.787 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:16.047 [ 00:20:16.047 { 00:20:16.047 "name": "BaseBdev2", 00:20:16.047 "aliases": [ 00:20:16.047 "1015b70a-f099-4fde-8289-2c7f41b3ef4e" 00:20:16.047 ], 00:20:16.047 "product_name": "Malloc disk", 00:20:16.047 "block_size": 512, 00:20:16.047 "num_blocks": 65536, 00:20:16.047 "uuid": "1015b70a-f099-4fde-8289-2c7f41b3ef4e", 00:20:16.047 "assigned_rate_limits": { 00:20:16.047 "rw_ios_per_sec": 0, 00:20:16.047 "rw_mbytes_per_sec": 0, 00:20:16.047 "r_mbytes_per_sec": 0, 00:20:16.047 "w_mbytes_per_sec": 0 00:20:16.047 }, 00:20:16.047 "claimed": true, 00:20:16.047 "claim_type": "exclusive_write", 00:20:16.047 "zoned": false, 00:20:16.047 "supported_io_types": { 00:20:16.047 "read": true, 00:20:16.047 "write": true, 00:20:16.047 "unmap": true, 00:20:16.047 "flush": true, 00:20:16.047 "reset": true, 00:20:16.047 "nvme_admin": false, 00:20:16.047 "nvme_io": false, 00:20:16.047 "nvme_io_md": false, 00:20:16.047 "write_zeroes": true, 00:20:16.047 "zcopy": true, 00:20:16.047 "get_zone_info": false, 00:20:16.047 "zone_management": false, 00:20:16.047 "zone_append": false, 00:20:16.047 "compare": false, 00:20:16.047 "compare_and_write": false, 00:20:16.047 "abort": true, 00:20:16.047 "seek_hole": false, 00:20:16.047 "seek_data": false, 00:20:16.047 "copy": true, 00:20:16.047 "nvme_iov_md": false 00:20:16.047 }, 00:20:16.047 "memory_domains": [ 00:20:16.047 { 00:20:16.047 "dma_device_id": "system", 00:20:16.047 "dma_device_type": 1 00:20:16.047 }, 00:20:16.047 { 00:20:16.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.047 "dma_device_type": 2 00:20:16.047 } 00:20:16.047 ], 00:20:16.047 "driver_specific": {} 00:20:16.047 } 00:20:16.047 ] 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.047 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.306 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.306 "name": "Existed_Raid", 00:20:16.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.306 "strip_size_kb": 64, 00:20:16.306 "state": "configuring", 00:20:16.306 "raid_level": "raid5f", 00:20:16.306 "superblock": false, 00:20:16.306 "num_base_bdevs": 3, 00:20:16.306 "num_base_bdevs_discovered": 2, 00:20:16.306 "num_base_bdevs_operational": 3, 00:20:16.306 "base_bdevs_list": [ 00:20:16.306 { 00:20:16.306 "name": "BaseBdev1", 00:20:16.306 "uuid": "33b27a47-3398-4f0c-b2d3-e4433c795248", 00:20:16.306 "is_configured": true, 00:20:16.306 "data_offset": 0, 00:20:16.306 "data_size": 65536 00:20:16.306 }, 00:20:16.306 { 00:20:16.307 "name": "BaseBdev2", 00:20:16.307 "uuid": "1015b70a-f099-4fde-8289-2c7f41b3ef4e", 00:20:16.307 "is_configured": true, 00:20:16.307 "data_offset": 0, 00:20:16.307 "data_size": 65536 00:20:16.307 }, 00:20:16.307 { 00:20:16.307 "name": "BaseBdev3", 00:20:16.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.307 "is_configured": false, 00:20:16.307 "data_offset": 0, 00:20:16.307 "data_size": 0 00:20:16.307 } 00:20:16.307 ] 00:20:16.307 }' 00:20:16.307 06:15:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.307 06:15:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:16.875 [2024-08-13 06:15:18.624619] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.875 [2024-08-13 06:15:18.624768] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:20:16.875 [2024-08-13 06:15:18.624782] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 
00:20:16.875 [2024-08-13 06:15:18.625076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:16.875 [2024-08-13 06:15:18.625483] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:20:16.875 [2024-08-13 06:15:18.625498] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:20:16.875 [2024-08-13 06:15:18.625684] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.875 BaseBdev3 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:16.875 06:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:17.135 06:15:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:17.394 [ 00:20:17.394 { 00:20:17.394 "name": "BaseBdev3", 00:20:17.394 "aliases": [ 00:20:17.394 "6549e683-c069-404e-9c0f-268adc5968db" 00:20:17.394 ], 00:20:17.394 "product_name": "Malloc disk", 00:20:17.394 "block_size": 512, 00:20:17.394 "num_blocks": 65536, 00:20:17.394 "uuid": "6549e683-c069-404e-9c0f-268adc5968db", 00:20:17.394 "assigned_rate_limits": { 00:20:17.394 "rw_ios_per_sec": 0, 00:20:17.394 "rw_mbytes_per_sec": 0, 00:20:17.394 "r_mbytes_per_sec": 0, 00:20:17.394 "w_mbytes_per_sec": 0 00:20:17.394 }, 00:20:17.394 "claimed": true, 00:20:17.394 "claim_type": "exclusive_write", 00:20:17.394 "zoned": false, 00:20:17.394 "supported_io_types": { 00:20:17.394 "read": true, 00:20:17.394 "write": true, 00:20:17.394 "unmap": true, 00:20:17.394 "flush": true, 00:20:17.394 "reset": true, 00:20:17.394 "nvme_admin": false, 00:20:17.394 "nvme_io": false, 00:20:17.394 "nvme_io_md": false, 00:20:17.394 "write_zeroes": true, 00:20:17.394 "zcopy": true, 00:20:17.394 "get_zone_info": false, 00:20:17.394 "zone_management": false, 00:20:17.394 "zone_append": false, 00:20:17.394 "compare": false, 00:20:17.394 "compare_and_write": false, 00:20:17.394 "abort": true, 00:20:17.394 "seek_hole": false, 00:20:17.394 "seek_data": false, 00:20:17.394 "copy": true, 00:20:17.394 "nvme_iov_md": false 00:20:17.394 }, 00:20:17.394 "memory_domains": [ 00:20:17.394 { 00:20:17.394 "dma_device_id": "system", 00:20:17.394 "dma_device_type": 1 00:20:17.394 }, 00:20:17.394 { 00:20:17.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.394 "dma_device_type": 2 00:20:17.394 } 00:20:17.394 ], 00:20:17.394 "driver_specific": {} 00:20:17.394 } 00:20:17.394 ] 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.394 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.653 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.653 "name": "Existed_Raid", 00:20:17.653 "uuid": "f56be1b6-7127-4ffb-8124-9833871a9abd", 00:20:17.653 "strip_size_kb": 64, 00:20:17.653 "state": "online", 00:20:17.653 "raid_level": "raid5f", 00:20:17.653 "superblock": false, 00:20:17.653 "num_base_bdevs": 3, 00:20:17.653 "num_base_bdevs_discovered": 3, 00:20:17.653 "num_base_bdevs_operational": 3, 00:20:17.653 "base_bdevs_list": [ 00:20:17.653 { 00:20:17.653 "name": "BaseBdev1", 00:20:17.653 "uuid": "33b27a47-3398-4f0c-b2d3-e4433c795248", 00:20:17.653 "is_configured": true, 00:20:17.653 "data_offset": 0, 00:20:17.653 "data_size": 65536 00:20:17.653 }, 00:20:17.653 { 00:20:17.653 "name": "BaseBdev2", 00:20:17.653 "uuid": "1015b70a-f099-4fde-8289-2c7f41b3ef4e", 00:20:17.653 "is_configured": true, 00:20:17.653 "data_offset": 0, 00:20:17.653 "data_size": 65536 00:20:17.653 }, 00:20:17.653 { 00:20:17.653 "name": "BaseBdev3", 00:20:17.653 "uuid": "6549e683-c069-404e-9c0f-268adc5968db", 00:20:17.653 "is_configured": true, 00:20:17.653 "data_offset": 0, 00:20:17.653 "data_size": 65536 00:20:17.653 } 00:20:17.653 ] 00:20:17.653 }' 00:20:17.653 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.653 06:15:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.222 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:18.222 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:18.222 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:18.222 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:18.222 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:18.222 
06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:18.222 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:18.222 06:15:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:18.222 [2024-08-13 06:15:19.998521] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.482 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:18.482 "name": "Existed_Raid", 00:20:18.482 "aliases": [ 00:20:18.482 "f56be1b6-7127-4ffb-8124-9833871a9abd" 00:20:18.482 ], 00:20:18.482 "product_name": "Raid Volume", 00:20:18.482 "block_size": 512, 00:20:18.482 "num_blocks": 131072, 00:20:18.482 "uuid": "f56be1b6-7127-4ffb-8124-9833871a9abd", 00:20:18.482 "assigned_rate_limits": { 00:20:18.482 "rw_ios_per_sec": 0, 00:20:18.482 "rw_mbytes_per_sec": 0, 00:20:18.482 "r_mbytes_per_sec": 0, 00:20:18.482 "w_mbytes_per_sec": 0 00:20:18.482 }, 00:20:18.482 "claimed": false, 00:20:18.482 "zoned": false, 00:20:18.482 "supported_io_types": { 00:20:18.482 "read": true, 00:20:18.482 "write": true, 00:20:18.482 "unmap": false, 00:20:18.482 "flush": false, 00:20:18.482 "reset": true, 00:20:18.482 "nvme_admin": false, 00:20:18.482 "nvme_io": false, 00:20:18.482 "nvme_io_md": false, 00:20:18.482 "write_zeroes": true, 00:20:18.482 "zcopy": false, 00:20:18.482 "get_zone_info": false, 00:20:18.482 "zone_management": false, 00:20:18.482 "zone_append": false, 00:20:18.482 "compare": false, 00:20:18.482 "compare_and_write": false, 00:20:18.482 "abort": false, 00:20:18.482 "seek_hole": false, 00:20:18.482 "seek_data": false, 00:20:18.482 "copy": false, 00:20:18.482 "nvme_iov_md": false 00:20:18.482 }, 00:20:18.482 "driver_specific": { 00:20:18.482 "raid": { 00:20:18.482 "uuid": "f56be1b6-7127-4ffb-8124-9833871a9abd", 00:20:18.482 "strip_size_kb": 64, 00:20:18.482 "state": "online", 00:20:18.482 "raid_level": "raid5f", 00:20:18.482 "superblock": false, 00:20:18.482 "num_base_bdevs": 3, 00:20:18.482 "num_base_bdevs_discovered": 3, 00:20:18.482 "num_base_bdevs_operational": 3, 00:20:18.482 "base_bdevs_list": [ 00:20:18.482 { 00:20:18.482 "name": "BaseBdev1", 00:20:18.482 "uuid": "33b27a47-3398-4f0c-b2d3-e4433c795248", 00:20:18.482 "is_configured": true, 00:20:18.482 "data_offset": 0, 00:20:18.482 "data_size": 65536 00:20:18.482 }, 00:20:18.482 { 00:20:18.482 "name": "BaseBdev2", 00:20:18.482 "uuid": "1015b70a-f099-4fde-8289-2c7f41b3ef4e", 00:20:18.482 "is_configured": true, 00:20:18.482 "data_offset": 0, 00:20:18.482 "data_size": 65536 00:20:18.482 }, 00:20:18.482 { 00:20:18.482 "name": "BaseBdev3", 00:20:18.482 "uuid": "6549e683-c069-404e-9c0f-268adc5968db", 00:20:18.482 "is_configured": true, 00:20:18.482 "data_offset": 0, 00:20:18.482 "data_size": 65536 00:20:18.482 } 00:20:18.482 ] 00:20:18.482 } 00:20:18.482 } 00:20:18.482 }' 00:20:18.482 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.482 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:18.482 BaseBdev2 00:20:18.482 BaseBdev3' 00:20:18.482 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:18.482 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:18.482 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:18.482 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:18.482 "name": "BaseBdev1", 00:20:18.482 "aliases": [ 00:20:18.482 "33b27a47-3398-4f0c-b2d3-e4433c795248" 00:20:18.482 ], 00:20:18.482 "product_name": "Malloc disk", 00:20:18.482 "block_size": 512, 00:20:18.482 "num_blocks": 65536, 00:20:18.482 "uuid": "33b27a47-3398-4f0c-b2d3-e4433c795248", 00:20:18.482 "assigned_rate_limits": { 00:20:18.482 "rw_ios_per_sec": 0, 00:20:18.482 "rw_mbytes_per_sec": 0, 00:20:18.482 "r_mbytes_per_sec": 0, 00:20:18.482 "w_mbytes_per_sec": 0 00:20:18.482 }, 00:20:18.482 "claimed": true, 00:20:18.482 "claim_type": "exclusive_write", 00:20:18.482 "zoned": false, 00:20:18.482 "supported_io_types": { 00:20:18.482 "read": true, 00:20:18.482 "write": true, 00:20:18.482 "unmap": true, 00:20:18.482 "flush": true, 00:20:18.482 "reset": true, 00:20:18.482 "nvme_admin": false, 00:20:18.482 "nvme_io": false, 00:20:18.482 "nvme_io_md": false, 00:20:18.482 "write_zeroes": true, 00:20:18.482 "zcopy": true, 00:20:18.482 "get_zone_info": false, 00:20:18.482 "zone_management": false, 00:20:18.482 "zone_append": false, 00:20:18.482 "compare": false, 00:20:18.482 "compare_and_write": false, 00:20:18.482 "abort": true, 00:20:18.482 "seek_hole": false, 00:20:18.482 "seek_data": false, 00:20:18.482 "copy": true, 00:20:18.482 "nvme_iov_md": false 00:20:18.482 }, 00:20:18.482 "memory_domains": [ 00:20:18.482 { 00:20:18.482 "dma_device_id": "system", 00:20:18.482 "dma_device_type": 1 00:20:18.482 }, 00:20:18.482 { 00:20:18.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.482 "dma_device_type": 2 00:20:18.482 } 00:20:18.482 ], 00:20:18.482 "driver_specific": {} 00:20:18.482 }' 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:18.834 06:15:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:19.093 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:19.093 "name": "BaseBdev2", 00:20:19.093 "aliases": [ 00:20:19.093 "1015b70a-f099-4fde-8289-2c7f41b3ef4e" 00:20:19.093 ], 00:20:19.093 "product_name": "Malloc disk", 00:20:19.093 "block_size": 512, 00:20:19.093 "num_blocks": 65536, 00:20:19.094 "uuid": "1015b70a-f099-4fde-8289-2c7f41b3ef4e", 00:20:19.094 "assigned_rate_limits": { 00:20:19.094 "rw_ios_per_sec": 0, 00:20:19.094 "rw_mbytes_per_sec": 0, 00:20:19.094 "r_mbytes_per_sec": 0, 00:20:19.094 "w_mbytes_per_sec": 0 00:20:19.094 }, 00:20:19.094 "claimed": true, 00:20:19.094 "claim_type": "exclusive_write", 00:20:19.094 "zoned": false, 00:20:19.094 "supported_io_types": { 00:20:19.094 "read": true, 00:20:19.094 "write": true, 00:20:19.094 "unmap": true, 00:20:19.094 "flush": true, 00:20:19.094 "reset": true, 00:20:19.094 "nvme_admin": false, 00:20:19.094 "nvme_io": false, 00:20:19.094 "nvme_io_md": false, 00:20:19.094 "write_zeroes": true, 00:20:19.094 "zcopy": true, 00:20:19.094 "get_zone_info": false, 00:20:19.094 "zone_management": false, 00:20:19.094 "zone_append": false, 00:20:19.094 "compare": false, 00:20:19.094 "compare_and_write": false, 00:20:19.094 "abort": true, 00:20:19.094 "seek_hole": false, 00:20:19.094 "seek_data": false, 00:20:19.094 "copy": true, 00:20:19.094 "nvme_iov_md": false 00:20:19.094 }, 00:20:19.094 "memory_domains": [ 00:20:19.094 { 00:20:19.094 "dma_device_id": "system", 00:20:19.094 "dma_device_type": 1 00:20:19.094 }, 00:20:19.094 { 00:20:19.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.094 "dma_device_type": 2 00:20:19.094 } 00:20:19.094 ], 00:20:19.094 "driver_specific": {} 00:20:19.094 }' 00:20:19.094 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.094 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.094 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:19.353 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.353 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.353 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:19.353 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.353 06:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.353 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:19.353 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.353 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.353 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:19.353 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:19.353 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:19.353 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:19.612 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:19.612 "name": 
"BaseBdev3", 00:20:19.612 "aliases": [ 00:20:19.612 "6549e683-c069-404e-9c0f-268adc5968db" 00:20:19.612 ], 00:20:19.612 "product_name": "Malloc disk", 00:20:19.612 "block_size": 512, 00:20:19.612 "num_blocks": 65536, 00:20:19.612 "uuid": "6549e683-c069-404e-9c0f-268adc5968db", 00:20:19.612 "assigned_rate_limits": { 00:20:19.612 "rw_ios_per_sec": 0, 00:20:19.612 "rw_mbytes_per_sec": 0, 00:20:19.612 "r_mbytes_per_sec": 0, 00:20:19.612 "w_mbytes_per_sec": 0 00:20:19.612 }, 00:20:19.612 "claimed": true, 00:20:19.612 "claim_type": "exclusive_write", 00:20:19.612 "zoned": false, 00:20:19.612 "supported_io_types": { 00:20:19.612 "read": true, 00:20:19.612 "write": true, 00:20:19.612 "unmap": true, 00:20:19.612 "flush": true, 00:20:19.612 "reset": true, 00:20:19.612 "nvme_admin": false, 00:20:19.612 "nvme_io": false, 00:20:19.612 "nvme_io_md": false, 00:20:19.612 "write_zeroes": true, 00:20:19.612 "zcopy": true, 00:20:19.612 "get_zone_info": false, 00:20:19.612 "zone_management": false, 00:20:19.612 "zone_append": false, 00:20:19.612 "compare": false, 00:20:19.612 "compare_and_write": false, 00:20:19.612 "abort": true, 00:20:19.612 "seek_hole": false, 00:20:19.612 "seek_data": false, 00:20:19.612 "copy": true, 00:20:19.612 "nvme_iov_md": false 00:20:19.612 }, 00:20:19.612 "memory_domains": [ 00:20:19.612 { 00:20:19.612 "dma_device_id": "system", 00:20:19.612 "dma_device_type": 1 00:20:19.612 }, 00:20:19.612 { 00:20:19.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.612 "dma_device_type": 2 00:20:19.612 } 00:20:19.612 ], 00:20:19.612 "driver_specific": {} 00:20:19.612 }' 00:20:19.612 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.612 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.872 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:20.131 [2024-08-13 06:15:21.839740] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 
-- # return 0 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.131 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.132 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.132 06:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.391 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.391 "name": "Existed_Raid", 00:20:20.391 "uuid": "f56be1b6-7127-4ffb-8124-9833871a9abd", 00:20:20.391 "strip_size_kb": 64, 00:20:20.391 "state": "online", 00:20:20.391 "raid_level": "raid5f", 00:20:20.391 "superblock": false, 00:20:20.391 "num_base_bdevs": 3, 00:20:20.391 "num_base_bdevs_discovered": 2, 00:20:20.391 "num_base_bdevs_operational": 2, 00:20:20.391 "base_bdevs_list": [ 00:20:20.391 { 00:20:20.391 "name": null, 00:20:20.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.391 "is_configured": false, 00:20:20.391 "data_offset": 0, 00:20:20.391 "data_size": 65536 00:20:20.391 }, 00:20:20.391 { 00:20:20.391 "name": "BaseBdev2", 00:20:20.391 "uuid": "1015b70a-f099-4fde-8289-2c7f41b3ef4e", 00:20:20.391 "is_configured": true, 00:20:20.391 "data_offset": 0, 00:20:20.391 "data_size": 65536 00:20:20.391 }, 00:20:20.391 { 00:20:20.391 "name": "BaseBdev3", 00:20:20.391 "uuid": "6549e683-c069-404e-9c0f-268adc5968db", 00:20:20.391 "is_configured": true, 00:20:20.391 "data_offset": 0, 00:20:20.391 "data_size": 65536 00:20:20.391 } 00:20:20.391 ] 00:20:20.391 }' 00:20:20.391 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.391 06:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.960 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:20.960 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:20.960 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.960 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:21.220 06:15:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:21.220 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:21.220 06:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:21.220 [2024-08-13 06:15:22.989345] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.220 [2024-08-13 06:15:22.989494] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.220 [2024-08-13 06:15:23.000287] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.479 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:21.479 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:21.479 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.479 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:21.738 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:21.739 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:21.739 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:21.739 [2024-08-13 06:15:23.447557] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:21.739 [2024-08-13 06:15:23.447607] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:20:21.739 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:21.739 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:21.739 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.739 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:21.998 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:21.999 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:21.999 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:21.999 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:21.999 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:21.999 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:22.259 BaseBdev2 00:20:22.259 06:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:22.259 06:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:22.259 06:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # 
local bdev_timeout= 00:20:22.259 06:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:22.259 06:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:22.259 06:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:22.259 06:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:22.518 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:22.518 [ 00:20:22.518 { 00:20:22.518 "name": "BaseBdev2", 00:20:22.518 "aliases": [ 00:20:22.518 "12c75368-b8dd-4ab0-ba80-c5659eab16a3" 00:20:22.518 ], 00:20:22.518 "product_name": "Malloc disk", 00:20:22.518 "block_size": 512, 00:20:22.518 "num_blocks": 65536, 00:20:22.518 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:22.518 "assigned_rate_limits": { 00:20:22.518 "rw_ios_per_sec": 0, 00:20:22.518 "rw_mbytes_per_sec": 0, 00:20:22.518 "r_mbytes_per_sec": 0, 00:20:22.518 "w_mbytes_per_sec": 0 00:20:22.518 }, 00:20:22.518 "claimed": false, 00:20:22.518 "zoned": false, 00:20:22.518 "supported_io_types": { 00:20:22.518 "read": true, 00:20:22.518 "write": true, 00:20:22.518 "unmap": true, 00:20:22.518 "flush": true, 00:20:22.518 "reset": true, 00:20:22.518 "nvme_admin": false, 00:20:22.518 "nvme_io": false, 00:20:22.518 "nvme_io_md": false, 00:20:22.518 "write_zeroes": true, 00:20:22.518 "zcopy": true, 00:20:22.518 "get_zone_info": false, 00:20:22.518 "zone_management": false, 00:20:22.518 "zone_append": false, 00:20:22.518 "compare": false, 00:20:22.518 "compare_and_write": false, 00:20:22.518 "abort": true, 00:20:22.518 "seek_hole": false, 00:20:22.518 "seek_data": false, 00:20:22.518 "copy": true, 00:20:22.518 "nvme_iov_md": false 00:20:22.518 }, 00:20:22.518 "memory_domains": [ 00:20:22.518 { 00:20:22.518 "dma_device_id": "system", 00:20:22.518 "dma_device_type": 1 00:20:22.518 }, 00:20:22.518 { 00:20:22.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.518 "dma_device_type": 2 00:20:22.518 } 00:20:22.518 ], 00:20:22.519 "driver_specific": {} 00:20:22.519 } 00:20:22.519 ] 00:20:22.519 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:22.519 06:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:22.519 06:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:22.519 06:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:22.778 BaseBdev3 00:20:22.778 06:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:22.778 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:22.778 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:22.778 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:22.778 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:22.778 06:15:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:22.778 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:23.038 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:23.297 [ 00:20:23.297 { 00:20:23.297 "name": "BaseBdev3", 00:20:23.297 "aliases": [ 00:20:23.297 "cd7387d6-6760-4fd5-b336-216471e7ed82" 00:20:23.297 ], 00:20:23.297 "product_name": "Malloc disk", 00:20:23.297 "block_size": 512, 00:20:23.297 "num_blocks": 65536, 00:20:23.297 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:23.297 "assigned_rate_limits": { 00:20:23.297 "rw_ios_per_sec": 0, 00:20:23.297 "rw_mbytes_per_sec": 0, 00:20:23.297 "r_mbytes_per_sec": 0, 00:20:23.297 "w_mbytes_per_sec": 0 00:20:23.297 }, 00:20:23.297 "claimed": false, 00:20:23.297 "zoned": false, 00:20:23.297 "supported_io_types": { 00:20:23.297 "read": true, 00:20:23.297 "write": true, 00:20:23.297 "unmap": true, 00:20:23.297 "flush": true, 00:20:23.297 "reset": true, 00:20:23.297 "nvme_admin": false, 00:20:23.297 "nvme_io": false, 00:20:23.297 "nvme_io_md": false, 00:20:23.297 "write_zeroes": true, 00:20:23.297 "zcopy": true, 00:20:23.297 "get_zone_info": false, 00:20:23.297 "zone_management": false, 00:20:23.297 "zone_append": false, 00:20:23.297 "compare": false, 00:20:23.297 "compare_and_write": false, 00:20:23.297 "abort": true, 00:20:23.298 "seek_hole": false, 00:20:23.298 "seek_data": false, 00:20:23.298 "copy": true, 00:20:23.298 "nvme_iov_md": false 00:20:23.298 }, 00:20:23.298 "memory_domains": [ 00:20:23.298 { 00:20:23.298 "dma_device_id": "system", 00:20:23.298 "dma_device_type": 1 00:20:23.298 }, 00:20:23.298 { 00:20:23.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.298 "dma_device_type": 2 00:20:23.298 } 00:20:23.298 ], 00:20:23.298 "driver_specific": {} 00:20:23.298 } 00:20:23.298 ] 00:20:23.298 06:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:23.298 06:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:23.298 06:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:23.298 06:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:23.298 [2024-08-13 06:15:25.049022] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:23.298 [2024-08-13 06:15:25.049134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:23.298 [2024-08-13 06:15:25.049155] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.298 [2024-08-13 06:15:25.050847] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.298 06:15:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.298 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.557 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.557 "name": "Existed_Raid", 00:20:23.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.557 "strip_size_kb": 64, 00:20:23.557 "state": "configuring", 00:20:23.557 "raid_level": "raid5f", 00:20:23.557 "superblock": false, 00:20:23.557 "num_base_bdevs": 3, 00:20:23.557 "num_base_bdevs_discovered": 2, 00:20:23.557 "num_base_bdevs_operational": 3, 00:20:23.557 "base_bdevs_list": [ 00:20:23.557 { 00:20:23.557 "name": "BaseBdev1", 00:20:23.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.557 "is_configured": false, 00:20:23.557 "data_offset": 0, 00:20:23.557 "data_size": 0 00:20:23.557 }, 00:20:23.557 { 00:20:23.557 "name": "BaseBdev2", 00:20:23.557 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:23.557 "is_configured": true, 00:20:23.557 "data_offset": 0, 00:20:23.557 "data_size": 65536 00:20:23.557 }, 00:20:23.557 { 00:20:23.557 "name": "BaseBdev3", 00:20:23.557 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:23.557 "is_configured": true, 00:20:23.557 "data_offset": 0, 00:20:23.557 "data_size": 65536 00:20:23.557 } 00:20:23.557 ] 00:20:23.557 }' 00:20:23.557 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.557 06:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.127 06:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:24.386 [2024-08-13 06:15:26.015395] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.386 
06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.386 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.646 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.646 "name": "Existed_Raid", 00:20:24.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.646 "strip_size_kb": 64, 00:20:24.646 "state": "configuring", 00:20:24.646 "raid_level": "raid5f", 00:20:24.646 "superblock": false, 00:20:24.646 "num_base_bdevs": 3, 00:20:24.646 "num_base_bdevs_discovered": 1, 00:20:24.646 "num_base_bdevs_operational": 3, 00:20:24.646 "base_bdevs_list": [ 00:20:24.646 { 00:20:24.646 "name": "BaseBdev1", 00:20:24.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.646 "is_configured": false, 00:20:24.646 "data_offset": 0, 00:20:24.646 "data_size": 0 00:20:24.646 }, 00:20:24.646 { 00:20:24.646 "name": null, 00:20:24.646 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:24.646 "is_configured": false, 00:20:24.646 "data_offset": 0, 00:20:24.646 "data_size": 65536 00:20:24.646 }, 00:20:24.646 { 00:20:24.646 "name": "BaseBdev3", 00:20:24.646 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:24.646 "is_configured": true, 00:20:24.646 "data_offset": 0, 00:20:24.646 "data_size": 65536 00:20:24.646 } 00:20:24.646 ] 00:20:24.646 }' 00:20:24.646 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.646 06:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.216 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.216 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:25.216 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:25.216 06:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:25.476 [2024-08-13 06:15:27.156747] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.476 BaseBdev1 00:20:25.476 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:25.476 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:25.476 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:25.476 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:25.476 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:25.476 06:15:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:25.476 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:25.736 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:25.996 [ 00:20:25.996 { 00:20:25.996 "name": "BaseBdev1", 00:20:25.996 "aliases": [ 00:20:25.996 "f1ca346b-4319-47b4-bce4-b037af8b5c9a" 00:20:25.996 ], 00:20:25.996 "product_name": "Malloc disk", 00:20:25.996 "block_size": 512, 00:20:25.996 "num_blocks": 65536, 00:20:25.996 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:25.996 "assigned_rate_limits": { 00:20:25.996 "rw_ios_per_sec": 0, 00:20:25.996 "rw_mbytes_per_sec": 0, 00:20:25.996 "r_mbytes_per_sec": 0, 00:20:25.996 "w_mbytes_per_sec": 0 00:20:25.996 }, 00:20:25.996 "claimed": true, 00:20:25.996 "claim_type": "exclusive_write", 00:20:25.996 "zoned": false, 00:20:25.996 "supported_io_types": { 00:20:25.996 "read": true, 00:20:25.996 "write": true, 00:20:25.996 "unmap": true, 00:20:25.996 "flush": true, 00:20:25.996 "reset": true, 00:20:25.996 "nvme_admin": false, 00:20:25.996 "nvme_io": false, 00:20:25.996 "nvme_io_md": false, 00:20:25.996 "write_zeroes": true, 00:20:25.996 "zcopy": true, 00:20:25.996 "get_zone_info": false, 00:20:25.996 "zone_management": false, 00:20:25.996 "zone_append": false, 00:20:25.996 "compare": false, 00:20:25.996 "compare_and_write": false, 00:20:25.996 "abort": true, 00:20:25.996 "seek_hole": false, 00:20:25.996 "seek_data": false, 00:20:25.996 "copy": true, 00:20:25.996 "nvme_iov_md": false 00:20:25.996 }, 00:20:25.996 "memory_domains": [ 00:20:25.996 { 00:20:25.996 "dma_device_id": "system", 00:20:25.996 "dma_device_type": 1 00:20:25.996 }, 00:20:25.996 { 00:20:25.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.996 "dma_device_type": 2 00:20:25.996 } 00:20:25.996 ], 00:20:25.996 "driver_specific": {} 00:20:25.996 } 00:20:25.996 ] 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:25.996 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.256 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.256 "name": "Existed_Raid", 00:20:26.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.256 "strip_size_kb": 64, 00:20:26.256 "state": "configuring", 00:20:26.256 "raid_level": "raid5f", 00:20:26.256 "superblock": false, 00:20:26.256 "num_base_bdevs": 3, 00:20:26.256 "num_base_bdevs_discovered": 2, 00:20:26.256 "num_base_bdevs_operational": 3, 00:20:26.256 "base_bdevs_list": [ 00:20:26.256 { 00:20:26.256 "name": "BaseBdev1", 00:20:26.256 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:26.256 "is_configured": true, 00:20:26.256 "data_offset": 0, 00:20:26.256 "data_size": 65536 00:20:26.256 }, 00:20:26.256 { 00:20:26.256 "name": null, 00:20:26.256 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:26.256 "is_configured": false, 00:20:26.256 "data_offset": 0, 00:20:26.256 "data_size": 65536 00:20:26.256 }, 00:20:26.256 { 00:20:26.256 "name": "BaseBdev3", 00:20:26.256 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:26.256 "is_configured": true, 00:20:26.256 "data_offset": 0, 00:20:26.256 "data_size": 65536 00:20:26.256 } 00:20:26.256 ] 00:20:26.256 }' 00:20:26.256 06:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.256 06:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.824 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:26.824 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.824 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:26.824 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:27.084 [2024-08-13 06:15:28.746200] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:27.084 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:27.084 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.084 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.084 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:27.085 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.085 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.085 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.085 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.085 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.085 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.085 06:15:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.085 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.344 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.344 "name": "Existed_Raid", 00:20:27.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.344 "strip_size_kb": 64, 00:20:27.344 "state": "configuring", 00:20:27.344 "raid_level": "raid5f", 00:20:27.344 "superblock": false, 00:20:27.344 "num_base_bdevs": 3, 00:20:27.344 "num_base_bdevs_discovered": 1, 00:20:27.344 "num_base_bdevs_operational": 3, 00:20:27.344 "base_bdevs_list": [ 00:20:27.344 { 00:20:27.344 "name": "BaseBdev1", 00:20:27.344 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:27.344 "is_configured": true, 00:20:27.344 "data_offset": 0, 00:20:27.344 "data_size": 65536 00:20:27.344 }, 00:20:27.344 { 00:20:27.344 "name": null, 00:20:27.344 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:27.344 "is_configured": false, 00:20:27.344 "data_offset": 0, 00:20:27.344 "data_size": 65536 00:20:27.344 }, 00:20:27.344 { 00:20:27.344 "name": null, 00:20:27.344 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:27.344 "is_configured": false, 00:20:27.344 "data_offset": 0, 00:20:27.344 "data_size": 65536 00:20:27.344 } 00:20:27.344 ] 00:20:27.344 }' 00:20:27.344 06:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.344 06:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.912 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:27.912 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:28.171 [2024-08-13 06:15:29.936262] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
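(Editorial sketch, not part of the captured log.) Around this point the test exercises hot removal and re-addition of a base bdev while the array stays in the configuring state. Reduced to the RPC calls visible in the trace (same illustrative rpc_py wrapper as in the earlier sketch):

    # Remove one base bdev; the raid bdev stays registered but drops to
    # num_base_bdevs_discovered = 1 and keeps state "configuring".
    rpc_py bdev_raid_remove_base_bdev BaseBdev3

    # The freed slot shows up as an unconfigured entry in base_bdevs_list.
    rpc_py bdev_raid_get_bdevs all \
        | jq '.[0].base_bdevs_list[2].is_configured'   # -> false

    # Re-attach the same bdev to the existing raid bdev.
    rpc_py bdev_raid_add_base_bdev Existed_Raid BaseBdev3

    # The slot is claimed again and counts toward the discovered base bdevs.
    rpc_py bdev_raid_get_bdevs all \
        | jq '.[0].base_bdevs_list[2].is_configured'   # -> true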
00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.171 06:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.431 06:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:28.431 "name": "Existed_Raid", 00:20:28.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.431 "strip_size_kb": 64, 00:20:28.431 "state": "configuring", 00:20:28.431 "raid_level": "raid5f", 00:20:28.431 "superblock": false, 00:20:28.431 "num_base_bdevs": 3, 00:20:28.431 "num_base_bdevs_discovered": 2, 00:20:28.431 "num_base_bdevs_operational": 3, 00:20:28.431 "base_bdevs_list": [ 00:20:28.431 { 00:20:28.431 "name": "BaseBdev1", 00:20:28.431 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:28.431 "is_configured": true, 00:20:28.431 "data_offset": 0, 00:20:28.431 "data_size": 65536 00:20:28.431 }, 00:20:28.431 { 00:20:28.431 "name": null, 00:20:28.431 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:28.431 "is_configured": false, 00:20:28.431 "data_offset": 0, 00:20:28.431 "data_size": 65536 00:20:28.431 }, 00:20:28.431 { 00:20:28.431 "name": "BaseBdev3", 00:20:28.431 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:28.431 "is_configured": true, 00:20:28.431 "data_offset": 0, 00:20:28.431 "data_size": 65536 00:20:28.431 } 00:20:28.431 ] 00:20:28.431 }' 00:20:28.431 06:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:28.431 06:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.000 06:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.000 06:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:29.259 06:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:29.259 06:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:29.518 [2024-08-13 06:15:31.090332] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.518 06:15:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.518 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.777 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.778 "name": "Existed_Raid", 00:20:29.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.778 "strip_size_kb": 64, 00:20:29.778 "state": "configuring", 00:20:29.778 "raid_level": "raid5f", 00:20:29.778 "superblock": false, 00:20:29.778 "num_base_bdevs": 3, 00:20:29.778 "num_base_bdevs_discovered": 1, 00:20:29.778 "num_base_bdevs_operational": 3, 00:20:29.778 "base_bdevs_list": [ 00:20:29.778 { 00:20:29.778 "name": null, 00:20:29.778 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:29.778 "is_configured": false, 00:20:29.778 "data_offset": 0, 00:20:29.778 "data_size": 65536 00:20:29.778 }, 00:20:29.778 { 00:20:29.778 "name": null, 00:20:29.778 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:29.778 "is_configured": false, 00:20:29.778 "data_offset": 0, 00:20:29.778 "data_size": 65536 00:20:29.778 }, 00:20:29.778 { 00:20:29.778 "name": "BaseBdev3", 00:20:29.778 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:29.778 "is_configured": true, 00:20:29.778 "data_offset": 0, 00:20:29.778 "data_size": 65536 00:20:29.778 } 00:20:29.778 ] 00:20:29.778 }' 00:20:29.778 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.778 06:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.346 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.346 06:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:30.346 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:30.346 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:30.605 [2024-08-13 06:15:32.278909] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.605 
06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.605 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.864 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.864 "name": "Existed_Raid", 00:20:30.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.864 "strip_size_kb": 64, 00:20:30.864 "state": "configuring", 00:20:30.864 "raid_level": "raid5f", 00:20:30.864 "superblock": false, 00:20:30.864 "num_base_bdevs": 3, 00:20:30.864 "num_base_bdevs_discovered": 2, 00:20:30.864 "num_base_bdevs_operational": 3, 00:20:30.864 "base_bdevs_list": [ 00:20:30.864 { 00:20:30.864 "name": null, 00:20:30.864 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:30.864 "is_configured": false, 00:20:30.864 "data_offset": 0, 00:20:30.864 "data_size": 65536 00:20:30.864 }, 00:20:30.864 { 00:20:30.864 "name": "BaseBdev2", 00:20:30.864 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:30.864 "is_configured": true, 00:20:30.864 "data_offset": 0, 00:20:30.864 "data_size": 65536 00:20:30.864 }, 00:20:30.864 { 00:20:30.864 "name": "BaseBdev3", 00:20:30.864 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:30.864 "is_configured": true, 00:20:30.864 "data_offset": 0, 00:20:30.864 "data_size": 65536 00:20:30.864 } 00:20:30.864 ] 00:20:30.864 }' 00:20:30.864 06:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.864 06:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.433 06:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.433 06:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:31.691 06:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:31.691 06:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.692 06:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:31.692 06:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f1ca346b-4319-47b4-bce4-b037af8b5c9a 00:20:31.951 [2024-08-13 06:15:33.675781] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:31.951 [2024-08-13 06:15:33.675832] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:20:31.951 [2024-08-13 06:15:33.675839] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:31.951 [2024-08-13 06:15:33.676068] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:20:31.951 
NewBaseBdev 00:20:31.951 [2024-08-13 06:15:33.676422] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:20:31.951 [2024-08-13 06:15:33.676460] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:20:31.951 [2024-08-13 06:15:33.676621] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.951 06:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:31.951 06:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:31.951 06:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:31.951 06:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:31.951 06:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:31.951 06:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:31.951 06:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:32.210 06:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:32.469 [ 00:20:32.469 { 00:20:32.469 "name": "NewBaseBdev", 00:20:32.469 "aliases": [ 00:20:32.469 "f1ca346b-4319-47b4-bce4-b037af8b5c9a" 00:20:32.469 ], 00:20:32.469 "product_name": "Malloc disk", 00:20:32.469 "block_size": 512, 00:20:32.469 "num_blocks": 65536, 00:20:32.469 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:32.469 "assigned_rate_limits": { 00:20:32.469 "rw_ios_per_sec": 0, 00:20:32.469 "rw_mbytes_per_sec": 0, 00:20:32.469 "r_mbytes_per_sec": 0, 00:20:32.469 "w_mbytes_per_sec": 0 00:20:32.469 }, 00:20:32.469 "claimed": true, 00:20:32.469 "claim_type": "exclusive_write", 00:20:32.469 "zoned": false, 00:20:32.469 "supported_io_types": { 00:20:32.469 "read": true, 00:20:32.469 "write": true, 00:20:32.469 "unmap": true, 00:20:32.469 "flush": true, 00:20:32.469 "reset": true, 00:20:32.469 "nvme_admin": false, 00:20:32.469 "nvme_io": false, 00:20:32.469 "nvme_io_md": false, 00:20:32.469 "write_zeroes": true, 00:20:32.469 "zcopy": true, 00:20:32.469 "get_zone_info": false, 00:20:32.469 "zone_management": false, 00:20:32.469 "zone_append": false, 00:20:32.469 "compare": false, 00:20:32.469 "compare_and_write": false, 00:20:32.469 "abort": true, 00:20:32.469 "seek_hole": false, 00:20:32.469 "seek_data": false, 00:20:32.469 "copy": true, 00:20:32.469 "nvme_iov_md": false 00:20:32.469 }, 00:20:32.469 "memory_domains": [ 00:20:32.469 { 00:20:32.469 "dma_device_id": "system", 00:20:32.469 "dma_device_type": 1 00:20:32.469 }, 00:20:32.469 { 00:20:32.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.469 "dma_device_type": 2 00:20:32.469 } 00:20:32.469 ], 00:20:32.469 "driver_specific": {} 00:20:32.469 } 00:20:32.469 ] 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.469 06:15:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.469 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.728 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.728 "name": "Existed_Raid", 00:20:32.728 "uuid": "b58005d8-b7c6-4ef2-aa74-6a353ccc147c", 00:20:32.728 "strip_size_kb": 64, 00:20:32.728 "state": "online", 00:20:32.728 "raid_level": "raid5f", 00:20:32.728 "superblock": false, 00:20:32.728 "num_base_bdevs": 3, 00:20:32.728 "num_base_bdevs_discovered": 3, 00:20:32.728 "num_base_bdevs_operational": 3, 00:20:32.728 "base_bdevs_list": [ 00:20:32.728 { 00:20:32.728 "name": "NewBaseBdev", 00:20:32.728 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:32.728 "is_configured": true, 00:20:32.728 "data_offset": 0, 00:20:32.728 "data_size": 65536 00:20:32.728 }, 00:20:32.728 { 00:20:32.728 "name": "BaseBdev2", 00:20:32.728 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:32.728 "is_configured": true, 00:20:32.728 "data_offset": 0, 00:20:32.728 "data_size": 65536 00:20:32.728 }, 00:20:32.728 { 00:20:32.728 "name": "BaseBdev3", 00:20:32.728 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:32.728 "is_configured": true, 00:20:32.728 "data_offset": 0, 00:20:32.728 "data_size": 65536 00:20:32.728 } 00:20:32.728 ] 00:20:32.728 }' 00:20:32.728 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.728 06:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:33.297 [2024-08-13 06:15:34.973836] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:33.297 "name": "Existed_Raid", 00:20:33.297 "aliases": [ 00:20:33.297 "b58005d8-b7c6-4ef2-aa74-6a353ccc147c" 00:20:33.297 ], 00:20:33.297 "product_name": "Raid Volume", 00:20:33.297 "block_size": 512, 00:20:33.297 "num_blocks": 131072, 00:20:33.297 "uuid": "b58005d8-b7c6-4ef2-aa74-6a353ccc147c", 00:20:33.297 "assigned_rate_limits": { 00:20:33.297 "rw_ios_per_sec": 0, 00:20:33.297 "rw_mbytes_per_sec": 0, 00:20:33.297 "r_mbytes_per_sec": 0, 00:20:33.297 "w_mbytes_per_sec": 0 00:20:33.297 }, 00:20:33.297 "claimed": false, 00:20:33.297 "zoned": false, 00:20:33.297 "supported_io_types": { 00:20:33.297 "read": true, 00:20:33.297 "write": true, 00:20:33.297 "unmap": false, 00:20:33.297 "flush": false, 00:20:33.297 "reset": true, 00:20:33.297 "nvme_admin": false, 00:20:33.297 "nvme_io": false, 00:20:33.297 "nvme_io_md": false, 00:20:33.297 "write_zeroes": true, 00:20:33.297 "zcopy": false, 00:20:33.297 "get_zone_info": false, 00:20:33.297 "zone_management": false, 00:20:33.297 "zone_append": false, 00:20:33.297 "compare": false, 00:20:33.297 "compare_and_write": false, 00:20:33.297 "abort": false, 00:20:33.297 "seek_hole": false, 00:20:33.297 "seek_data": false, 00:20:33.297 "copy": false, 00:20:33.297 "nvme_iov_md": false 00:20:33.297 }, 00:20:33.297 "driver_specific": { 00:20:33.297 "raid": { 00:20:33.297 "uuid": "b58005d8-b7c6-4ef2-aa74-6a353ccc147c", 00:20:33.297 "strip_size_kb": 64, 00:20:33.297 "state": "online", 00:20:33.297 "raid_level": "raid5f", 00:20:33.297 "superblock": false, 00:20:33.297 "num_base_bdevs": 3, 00:20:33.297 "num_base_bdevs_discovered": 3, 00:20:33.297 "num_base_bdevs_operational": 3, 00:20:33.297 "base_bdevs_list": [ 00:20:33.297 { 00:20:33.297 "name": "NewBaseBdev", 00:20:33.297 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:33.297 "is_configured": true, 00:20:33.297 "data_offset": 0, 00:20:33.297 "data_size": 65536 00:20:33.297 }, 00:20:33.297 { 00:20:33.297 "name": "BaseBdev2", 00:20:33.297 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:33.297 "is_configured": true, 00:20:33.297 "data_offset": 0, 00:20:33.297 "data_size": 65536 00:20:33.297 }, 00:20:33.297 { 00:20:33.297 "name": "BaseBdev3", 00:20:33.297 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:33.297 "is_configured": true, 00:20:33.297 "data_offset": 0, 00:20:33.297 "data_size": 65536 00:20:33.297 } 00:20:33.297 ] 00:20:33.297 } 00:20:33.297 } 00:20:33.297 }' 00:20:33.297 06:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:33.297 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:33.297 BaseBdev2 00:20:33.297 BaseBdev3' 00:20:33.297 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:33.298 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:33.298 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:33.557 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:33.557 "name": 
"NewBaseBdev", 00:20:33.557 "aliases": [ 00:20:33.557 "f1ca346b-4319-47b4-bce4-b037af8b5c9a" 00:20:33.557 ], 00:20:33.557 "product_name": "Malloc disk", 00:20:33.557 "block_size": 512, 00:20:33.557 "num_blocks": 65536, 00:20:33.557 "uuid": "f1ca346b-4319-47b4-bce4-b037af8b5c9a", 00:20:33.557 "assigned_rate_limits": { 00:20:33.557 "rw_ios_per_sec": 0, 00:20:33.557 "rw_mbytes_per_sec": 0, 00:20:33.557 "r_mbytes_per_sec": 0, 00:20:33.557 "w_mbytes_per_sec": 0 00:20:33.557 }, 00:20:33.557 "claimed": true, 00:20:33.557 "claim_type": "exclusive_write", 00:20:33.557 "zoned": false, 00:20:33.557 "supported_io_types": { 00:20:33.557 "read": true, 00:20:33.557 "write": true, 00:20:33.557 "unmap": true, 00:20:33.557 "flush": true, 00:20:33.557 "reset": true, 00:20:33.557 "nvme_admin": false, 00:20:33.557 "nvme_io": false, 00:20:33.557 "nvme_io_md": false, 00:20:33.557 "write_zeroes": true, 00:20:33.557 "zcopy": true, 00:20:33.557 "get_zone_info": false, 00:20:33.557 "zone_management": false, 00:20:33.557 "zone_append": false, 00:20:33.557 "compare": false, 00:20:33.557 "compare_and_write": false, 00:20:33.557 "abort": true, 00:20:33.557 "seek_hole": false, 00:20:33.557 "seek_data": false, 00:20:33.557 "copy": true, 00:20:33.557 "nvme_iov_md": false 00:20:33.557 }, 00:20:33.557 "memory_domains": [ 00:20:33.557 { 00:20:33.557 "dma_device_id": "system", 00:20:33.557 "dma_device_type": 1 00:20:33.557 }, 00:20:33.557 { 00:20:33.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.557 "dma_device_type": 2 00:20:33.557 } 00:20:33.557 ], 00:20:33.557 "driver_specific": {} 00:20:33.557 }' 00:20:33.557 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.557 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.815 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:34.074 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:34.074 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:34.074 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:34.074 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:34.074 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:34.074 "name": "BaseBdev2", 00:20:34.074 "aliases": [ 00:20:34.074 "12c75368-b8dd-4ab0-ba80-c5659eab16a3" 00:20:34.074 ], 00:20:34.074 "product_name": "Malloc disk", 00:20:34.074 
"block_size": 512, 00:20:34.074 "num_blocks": 65536, 00:20:34.074 "uuid": "12c75368-b8dd-4ab0-ba80-c5659eab16a3", 00:20:34.074 "assigned_rate_limits": { 00:20:34.074 "rw_ios_per_sec": 0, 00:20:34.074 "rw_mbytes_per_sec": 0, 00:20:34.074 "r_mbytes_per_sec": 0, 00:20:34.074 "w_mbytes_per_sec": 0 00:20:34.074 }, 00:20:34.074 "claimed": true, 00:20:34.074 "claim_type": "exclusive_write", 00:20:34.074 "zoned": false, 00:20:34.074 "supported_io_types": { 00:20:34.074 "read": true, 00:20:34.074 "write": true, 00:20:34.074 "unmap": true, 00:20:34.074 "flush": true, 00:20:34.074 "reset": true, 00:20:34.074 "nvme_admin": false, 00:20:34.074 "nvme_io": false, 00:20:34.074 "nvme_io_md": false, 00:20:34.074 "write_zeroes": true, 00:20:34.074 "zcopy": true, 00:20:34.074 "get_zone_info": false, 00:20:34.074 "zone_management": false, 00:20:34.074 "zone_append": false, 00:20:34.074 "compare": false, 00:20:34.074 "compare_and_write": false, 00:20:34.074 "abort": true, 00:20:34.074 "seek_hole": false, 00:20:34.074 "seek_data": false, 00:20:34.074 "copy": true, 00:20:34.075 "nvme_iov_md": false 00:20:34.075 }, 00:20:34.075 "memory_domains": [ 00:20:34.075 { 00:20:34.075 "dma_device_id": "system", 00:20:34.075 "dma_device_type": 1 00:20:34.075 }, 00:20:34.075 { 00:20:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.075 "dma_device_type": 2 00:20:34.075 } 00:20:34.075 ], 00:20:34.075 "driver_specific": {} 00:20:34.075 }' 00:20:34.075 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.333 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.333 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:34.333 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.333 06:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.333 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:34.333 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:34.333 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:34.592 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:34.592 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:34.592 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:34.592 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:34.592 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:34.592 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:34.592 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:34.851 "name": "BaseBdev3", 00:20:34.851 "aliases": [ 00:20:34.851 "cd7387d6-6760-4fd5-b336-216471e7ed82" 00:20:34.851 ], 00:20:34.851 "product_name": "Malloc disk", 00:20:34.851 "block_size": 512, 00:20:34.851 "num_blocks": 65536, 00:20:34.851 "uuid": "cd7387d6-6760-4fd5-b336-216471e7ed82", 00:20:34.851 "assigned_rate_limits": { 00:20:34.851 
"rw_ios_per_sec": 0, 00:20:34.851 "rw_mbytes_per_sec": 0, 00:20:34.851 "r_mbytes_per_sec": 0, 00:20:34.851 "w_mbytes_per_sec": 0 00:20:34.851 }, 00:20:34.851 "claimed": true, 00:20:34.851 "claim_type": "exclusive_write", 00:20:34.851 "zoned": false, 00:20:34.851 "supported_io_types": { 00:20:34.851 "read": true, 00:20:34.851 "write": true, 00:20:34.851 "unmap": true, 00:20:34.851 "flush": true, 00:20:34.851 "reset": true, 00:20:34.851 "nvme_admin": false, 00:20:34.851 "nvme_io": false, 00:20:34.851 "nvme_io_md": false, 00:20:34.851 "write_zeroes": true, 00:20:34.851 "zcopy": true, 00:20:34.851 "get_zone_info": false, 00:20:34.851 "zone_management": false, 00:20:34.851 "zone_append": false, 00:20:34.851 "compare": false, 00:20:34.851 "compare_and_write": false, 00:20:34.851 "abort": true, 00:20:34.851 "seek_hole": false, 00:20:34.851 "seek_data": false, 00:20:34.851 "copy": true, 00:20:34.851 "nvme_iov_md": false 00:20:34.851 }, 00:20:34.851 "memory_domains": [ 00:20:34.851 { 00:20:34.851 "dma_device_id": "system", 00:20:34.851 "dma_device_type": 1 00:20:34.851 }, 00:20:34.851 { 00:20:34.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.851 "dma_device_type": 2 00:20:34.851 } 00:20:34.851 ], 00:20:34.851 "driver_specific": {} 00:20:34.851 }' 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:34.851 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.115 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.115 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.115 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.115 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.115 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.115 06:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:35.390 [2024-08-13 06:15:36.970331] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:35.390 [2024-08-13 06:15:36.970442] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:35.390 [2024-08-13 06:15:36.970528] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.390 [2024-08-13 06:15:36.970778] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.390 [2024-08-13 06:15:36.970811] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 97228 00:20:35.390 06:15:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 97228 ']' 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 97228 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97228 00:20:35.390 killing process with pid 97228 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97228' 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 97228 00:20:35.390 [2024-08-13 06:15:37.046139] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:35.390 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 97228 00:20:35.390 [2024-08-13 06:15:37.076909] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:35.666 00:20:35.666 real 0m24.925s 00:20:35.666 user 0m45.862s 00:20:35.666 sys 0m4.210s 00:20:35.666 ************************************ 00:20:35.666 END TEST raid5f_state_function_test 00:20:35.666 ************************************ 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 06:15:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:35.666 06:15:37 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:35.666 06:15:37 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:35.666 06:15:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 ************************************ 00:20:35.666 START TEST raid5f_state_function_test_sb 00:20:35.666 ************************************ 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 3 true 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:35.666 06:15:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:35.666 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=98129 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 98129' 00:20:35.667 Process raid pid: 98129 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 98129 /var/tmp/spdk-raid.sock 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 98129 ']' 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:35.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
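(Before any RAID RPCs are issued, the superblock variant boots a bare bdev_svc application on its own RPC socket and waits for it to answer; pieced together from the commands traced above, the launch-and-wait sequence amounts to roughly:
  # start a minimal SPDK app with bdev_raid debug logging on a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # waitforlisten then polls until the socket accepts RPCs; rpc_get_methods is one probe that works
  # (assumption: the helper's exact probe may differ)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
Backgrounding the app and the poll loop are reconstructions; only the binary, its flags and the socket path are taken verbatim from the trace.)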
00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.667 06:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.926 [2024-08-13 06:15:37.537544] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:20:35.926 [2024-08-13 06:15:37.537845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.926 [2024-08-13 06:15:37.690723] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.185 [2024-08-13 06:15:37.736817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.185 [2024-08-13 06:15:37.779264] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.185 [2024-08-13 06:15:37.779381] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:36.753 [2024-08-13 06:15:38.507250] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.753 [2024-08-13 06:15:38.507351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.753 [2024-08-13 06:15:38.507368] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.753 [2024-08-13 06:15:38.507376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.753 [2024-08-13 06:15:38.507385] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.753 [2024-08-13 06:15:38.507392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
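(The array is created before its base bdevs exist, which is why the very next check expects the configuring state; a hand-run equivalent of the create-plus-check, using the RPCs and the jq select filter shown in the trace (the trailing .state extraction is added here for brevity), would be roughly:
  # 3-disk raid5f, 64 KiB strip, -s = store an on-disk superblock; the base bdevs may be registered later
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # until all three bases are discovered the volume reports "configuring" rather than "online"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
)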
00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.753 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.012 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.012 "name": "Existed_Raid", 00:20:37.012 "uuid": "f8900cf6-5e9a-4f2c-b944-921ab72ab574", 00:20:37.012 "strip_size_kb": 64, 00:20:37.012 "state": "configuring", 00:20:37.012 "raid_level": "raid5f", 00:20:37.012 "superblock": true, 00:20:37.012 "num_base_bdevs": 3, 00:20:37.012 "num_base_bdevs_discovered": 0, 00:20:37.012 "num_base_bdevs_operational": 3, 00:20:37.012 "base_bdevs_list": [ 00:20:37.012 { 00:20:37.012 "name": "BaseBdev1", 00:20:37.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.012 "is_configured": false, 00:20:37.012 "data_offset": 0, 00:20:37.012 "data_size": 0 00:20:37.012 }, 00:20:37.012 { 00:20:37.012 "name": "BaseBdev2", 00:20:37.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.012 "is_configured": false, 00:20:37.012 "data_offset": 0, 00:20:37.012 "data_size": 0 00:20:37.012 }, 00:20:37.012 { 00:20:37.012 "name": "BaseBdev3", 00:20:37.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.012 "is_configured": false, 00:20:37.012 "data_offset": 0, 00:20:37.012 "data_size": 0 00:20:37.012 } 00:20:37.012 ] 00:20:37.012 }' 00:20:37.012 06:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.012 06:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.579 06:15:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:37.839 [2024-08-13 06:15:39.385706] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.839 [2024-08-13 06:15:39.385779] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:20:37.839 06:15:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:37.839 [2024-08-13 06:15:39.585367] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:37.839 [2024-08-13 06:15:39.585441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:37.839 [2024-08-13 06:15:39.585466] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:37.839 [2024-08-13 06:15:39.585485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:37.839 [2024-08-13 06:15:39.585503] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:37.839 [2024-08-13 06:15:39.585520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:37.839 06:15:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:38.098 [2024-08-13 06:15:39.781671] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:20:38.098 BaseBdev1 00:20:38.098 06:15:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:38.098 06:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:38.098 06:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:38.098 06:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:38.098 06:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:38.098 06:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:38.098 06:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.357 06:15:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:38.617 [ 00:20:38.617 { 00:20:38.617 "name": "BaseBdev1", 00:20:38.617 "aliases": [ 00:20:38.617 "af2b966d-5fa7-46c8-adbc-325134a371b9" 00:20:38.617 ], 00:20:38.617 "product_name": "Malloc disk", 00:20:38.617 "block_size": 512, 00:20:38.617 "num_blocks": 65536, 00:20:38.617 "uuid": "af2b966d-5fa7-46c8-adbc-325134a371b9", 00:20:38.617 "assigned_rate_limits": { 00:20:38.617 "rw_ios_per_sec": 0, 00:20:38.617 "rw_mbytes_per_sec": 0, 00:20:38.617 "r_mbytes_per_sec": 0, 00:20:38.617 "w_mbytes_per_sec": 0 00:20:38.617 }, 00:20:38.617 "claimed": true, 00:20:38.617 "claim_type": "exclusive_write", 00:20:38.617 "zoned": false, 00:20:38.617 "supported_io_types": { 00:20:38.617 "read": true, 00:20:38.617 "write": true, 00:20:38.617 "unmap": true, 00:20:38.617 "flush": true, 00:20:38.617 "reset": true, 00:20:38.617 "nvme_admin": false, 00:20:38.617 "nvme_io": false, 00:20:38.617 "nvme_io_md": false, 00:20:38.617 "write_zeroes": true, 00:20:38.617 "zcopy": true, 00:20:38.617 "get_zone_info": false, 00:20:38.617 "zone_management": false, 00:20:38.617 "zone_append": false, 00:20:38.617 "compare": false, 00:20:38.617 "compare_and_write": false, 00:20:38.617 "abort": true, 00:20:38.617 "seek_hole": false, 00:20:38.617 "seek_data": false, 00:20:38.617 "copy": true, 00:20:38.617 "nvme_iov_md": false 00:20:38.617 }, 00:20:38.617 "memory_domains": [ 00:20:38.617 { 00:20:38.617 "dma_device_id": "system", 00:20:38.617 "dma_device_type": 1 00:20:38.617 }, 00:20:38.617 { 00:20:38.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.617 "dma_device_type": 2 00:20:38.617 } 00:20:38.617 ], 00:20:38.617 "driver_specific": {} 00:20:38.617 } 00:20:38.617 ] 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
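(At this point verify_raid_bdev_state boils down to pulling the Existed_Raid entry once and comparing a handful of its fields against the expected values held in the locals above; a stripped-down sketch of that comparison, not the helper's actual code, is:
  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # expected while only BaseBdev1 exists: still configuring, raid5f, 64 KiB strip, 3 bases planned
  [ "$(jq -r .state <<< "$info")" = configuring ]
  [ "$(jq -r .raid_level <<< "$info")" = raid5f ]
  [ "$(jq -r .strip_size_kb <<< "$info")" = 64 ]
  [ "$(jq -r .num_base_bdevs <<< "$info")" = 3 ]
With the superblock enabled, each configured base in the dump carries data_offset 2048 and data_size 63488 blocks, as seen in the JSON that follows.)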
00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.617 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.876 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.876 "name": "Existed_Raid", 00:20:38.876 "uuid": "bb558316-31fb-4c0b-bfda-47e138beaebc", 00:20:38.876 "strip_size_kb": 64, 00:20:38.876 "state": "configuring", 00:20:38.876 "raid_level": "raid5f", 00:20:38.876 "superblock": true, 00:20:38.876 "num_base_bdevs": 3, 00:20:38.876 "num_base_bdevs_discovered": 1, 00:20:38.876 "num_base_bdevs_operational": 3, 00:20:38.876 "base_bdevs_list": [ 00:20:38.876 { 00:20:38.876 "name": "BaseBdev1", 00:20:38.876 "uuid": "af2b966d-5fa7-46c8-adbc-325134a371b9", 00:20:38.876 "is_configured": true, 00:20:38.876 "data_offset": 2048, 00:20:38.876 "data_size": 63488 00:20:38.876 }, 00:20:38.876 { 00:20:38.876 "name": "BaseBdev2", 00:20:38.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.876 "is_configured": false, 00:20:38.876 "data_offset": 0, 00:20:38.876 "data_size": 0 00:20:38.876 }, 00:20:38.876 { 00:20:38.876 "name": "BaseBdev3", 00:20:38.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.877 "is_configured": false, 00:20:38.877 "data_offset": 0, 00:20:38.877 "data_size": 0 00:20:38.877 } 00:20:38.877 ] 00:20:38.877 }' 00:20:38.877 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.877 06:15:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.136 06:15:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:39.395 [2024-08-13 06:15:41.047528] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:39.395 [2024-08-13 06:15:41.047640] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:20:39.395 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:39.654 [2024-08-13 06:15:41.239268] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.654 [2024-08-13 06:15:41.240969] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:39.654 [2024-08-13 06:15:41.241047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:39.654 [2024-08-13 06:15:41.241078] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:20:39.654 [2024-08-13 06:15:41.241097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.654 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.914 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.914 "name": "Existed_Raid", 00:20:39.914 "uuid": "5aa6e687-2f7f-4b87-bfa7-4d9bb1249a86", 00:20:39.914 "strip_size_kb": 64, 00:20:39.914 "state": "configuring", 00:20:39.914 "raid_level": "raid5f", 00:20:39.914 "superblock": true, 00:20:39.914 "num_base_bdevs": 3, 00:20:39.914 "num_base_bdevs_discovered": 1, 00:20:39.914 "num_base_bdevs_operational": 3, 00:20:39.914 "base_bdevs_list": [ 00:20:39.914 { 00:20:39.914 "name": "BaseBdev1", 00:20:39.914 "uuid": "af2b966d-5fa7-46c8-adbc-325134a371b9", 00:20:39.914 "is_configured": true, 00:20:39.914 "data_offset": 2048, 00:20:39.914 "data_size": 63488 00:20:39.914 }, 00:20:39.914 { 00:20:39.914 "name": "BaseBdev2", 00:20:39.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.914 "is_configured": false, 00:20:39.914 "data_offset": 0, 00:20:39.914 "data_size": 0 00:20:39.914 }, 00:20:39.914 { 00:20:39.914 "name": "BaseBdev3", 00:20:39.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.914 "is_configured": false, 00:20:39.914 "data_offset": 0, 00:20:39.914 "data_size": 0 00:20:39.914 } 00:20:39.914 ] 00:20:39.914 }' 00:20:39.914 06:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.914 06:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:40.483 [2024-08-13 
06:15:42.231821] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.483 BaseBdev2 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:40.483 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:40.742 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:41.001 [ 00:20:41.001 { 00:20:41.001 "name": "BaseBdev2", 00:20:41.001 "aliases": [ 00:20:41.001 "1a4cdfbf-d266-433a-8ce6-94a77fb5820b" 00:20:41.001 ], 00:20:41.001 "product_name": "Malloc disk", 00:20:41.001 "block_size": 512, 00:20:41.001 "num_blocks": 65536, 00:20:41.001 "uuid": "1a4cdfbf-d266-433a-8ce6-94a77fb5820b", 00:20:41.001 "assigned_rate_limits": { 00:20:41.001 "rw_ios_per_sec": 0, 00:20:41.001 "rw_mbytes_per_sec": 0, 00:20:41.001 "r_mbytes_per_sec": 0, 00:20:41.001 "w_mbytes_per_sec": 0 00:20:41.001 }, 00:20:41.001 "claimed": true, 00:20:41.001 "claim_type": "exclusive_write", 00:20:41.001 "zoned": false, 00:20:41.001 "supported_io_types": { 00:20:41.001 "read": true, 00:20:41.001 "write": true, 00:20:41.001 "unmap": true, 00:20:41.001 "flush": true, 00:20:41.001 "reset": true, 00:20:41.001 "nvme_admin": false, 00:20:41.001 "nvme_io": false, 00:20:41.001 "nvme_io_md": false, 00:20:41.001 "write_zeroes": true, 00:20:41.001 "zcopy": true, 00:20:41.001 "get_zone_info": false, 00:20:41.001 "zone_management": false, 00:20:41.001 "zone_append": false, 00:20:41.001 "compare": false, 00:20:41.001 "compare_and_write": false, 00:20:41.001 "abort": true, 00:20:41.001 "seek_hole": false, 00:20:41.001 "seek_data": false, 00:20:41.001 "copy": true, 00:20:41.001 "nvme_iov_md": false 00:20:41.001 }, 00:20:41.001 "memory_domains": [ 00:20:41.001 { 00:20:41.001 "dma_device_id": "system", 00:20:41.001 "dma_device_type": 1 00:20:41.001 }, 00:20:41.001 { 00:20:41.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.001 "dma_device_type": 2 00:20:41.001 } 00:20:41.001 ], 00:20:41.001 "driver_specific": {} 00:20:41.001 } 00:20:41.001 ] 00:20:41.001 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.002 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.261 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.261 "name": "Existed_Raid", 00:20:41.261 "uuid": "5aa6e687-2f7f-4b87-bfa7-4d9bb1249a86", 00:20:41.261 "strip_size_kb": 64, 00:20:41.261 "state": "configuring", 00:20:41.261 "raid_level": "raid5f", 00:20:41.261 "superblock": true, 00:20:41.261 "num_base_bdevs": 3, 00:20:41.261 "num_base_bdevs_discovered": 2, 00:20:41.261 "num_base_bdevs_operational": 3, 00:20:41.261 "base_bdevs_list": [ 00:20:41.261 { 00:20:41.261 "name": "BaseBdev1", 00:20:41.261 "uuid": "af2b966d-5fa7-46c8-adbc-325134a371b9", 00:20:41.261 "is_configured": true, 00:20:41.261 "data_offset": 2048, 00:20:41.261 "data_size": 63488 00:20:41.261 }, 00:20:41.261 { 00:20:41.261 "name": "BaseBdev2", 00:20:41.261 "uuid": "1a4cdfbf-d266-433a-8ce6-94a77fb5820b", 00:20:41.261 "is_configured": true, 00:20:41.261 "data_offset": 2048, 00:20:41.261 "data_size": 63488 00:20:41.261 }, 00:20:41.261 { 00:20:41.261 "name": "BaseBdev3", 00:20:41.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.261 "is_configured": false, 00:20:41.261 "data_offset": 0, 00:20:41.261 "data_size": 0 00:20:41.261 } 00:20:41.261 ] 00:20:41.261 }' 00:20:41.261 06:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.261 06:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.520 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:41.779 [2024-08-13 06:15:43.464523] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.779 [2024-08-13 06:15:43.464706] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:20:41.779 [2024-08-13 06:15:43.464726] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:41.779 [2024-08-13 06:15:43.464978] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:41.779 [2024-08-13 06:15:43.465392] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:20:41.779 [2024-08-13 06:15:43.465406] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:20:41.779 [2024-08-13 06:15:43.465523] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.779 BaseBdev3 00:20:41.779 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:41.779 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:41.779 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:41.779 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:41.779 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:41.779 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:41.779 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:42.045 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:42.311 [ 00:20:42.311 { 00:20:42.311 "name": "BaseBdev3", 00:20:42.311 "aliases": [ 00:20:42.311 "5b051e0e-3646-4c87-ad33-7b468ca0356a" 00:20:42.311 ], 00:20:42.311 "product_name": "Malloc disk", 00:20:42.311 "block_size": 512, 00:20:42.311 "num_blocks": 65536, 00:20:42.311 "uuid": "5b051e0e-3646-4c87-ad33-7b468ca0356a", 00:20:42.311 "assigned_rate_limits": { 00:20:42.311 "rw_ios_per_sec": 0, 00:20:42.311 "rw_mbytes_per_sec": 0, 00:20:42.311 "r_mbytes_per_sec": 0, 00:20:42.311 "w_mbytes_per_sec": 0 00:20:42.311 }, 00:20:42.311 "claimed": true, 00:20:42.311 "claim_type": "exclusive_write", 00:20:42.311 "zoned": false, 00:20:42.311 "supported_io_types": { 00:20:42.311 "read": true, 00:20:42.311 "write": true, 00:20:42.311 "unmap": true, 00:20:42.311 "flush": true, 00:20:42.311 "reset": true, 00:20:42.311 "nvme_admin": false, 00:20:42.311 "nvme_io": false, 00:20:42.311 "nvme_io_md": false, 00:20:42.311 "write_zeroes": true, 00:20:42.311 "zcopy": true, 00:20:42.311 "get_zone_info": false, 00:20:42.311 "zone_management": false, 00:20:42.311 "zone_append": false, 00:20:42.311 "compare": false, 00:20:42.311 "compare_and_write": false, 00:20:42.311 "abort": true, 00:20:42.311 "seek_hole": false, 00:20:42.311 "seek_data": false, 00:20:42.311 "copy": true, 00:20:42.311 "nvme_iov_md": false 00:20:42.311 }, 00:20:42.311 "memory_domains": [ 00:20:42.311 { 00:20:42.311 "dma_device_id": "system", 00:20:42.311 "dma_device_type": 1 00:20:42.311 }, 00:20:42.311 { 00:20:42.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.311 "dma_device_type": 2 00:20:42.311 } 00:20:42.311 ], 00:20:42.311 "driver_specific": {} 00:20:42.311 } 00:20:42.311 ] 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.311 06:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.570 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.570 "name": "Existed_Raid", 00:20:42.570 "uuid": "5aa6e687-2f7f-4b87-bfa7-4d9bb1249a86", 00:20:42.570 "strip_size_kb": 64, 00:20:42.570 "state": "online", 00:20:42.570 "raid_level": "raid5f", 00:20:42.570 "superblock": true, 00:20:42.570 "num_base_bdevs": 3, 00:20:42.570 "num_base_bdevs_discovered": 3, 00:20:42.570 "num_base_bdevs_operational": 3, 00:20:42.570 "base_bdevs_list": [ 00:20:42.570 { 00:20:42.570 "name": "BaseBdev1", 00:20:42.570 "uuid": "af2b966d-5fa7-46c8-adbc-325134a371b9", 00:20:42.570 "is_configured": true, 00:20:42.570 "data_offset": 2048, 00:20:42.570 "data_size": 63488 00:20:42.570 }, 00:20:42.570 { 00:20:42.570 "name": "BaseBdev2", 00:20:42.570 "uuid": "1a4cdfbf-d266-433a-8ce6-94a77fb5820b", 00:20:42.570 "is_configured": true, 00:20:42.570 "data_offset": 2048, 00:20:42.570 "data_size": 63488 00:20:42.570 }, 00:20:42.570 { 00:20:42.570 "name": "BaseBdev3", 00:20:42.570 "uuid": "5b051e0e-3646-4c87-ad33-7b468ca0356a", 00:20:42.570 "is_configured": true, 00:20:42.570 "data_offset": 2048, 00:20:42.570 "data_size": 63488 00:20:42.570 } 00:20:42.570 ] 00:20:42.570 }' 00:20:42.570 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.570 06:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:43.138 06:15:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:43.138 [2024-08-13 06:15:44.826548] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:43.138 "name": "Existed_Raid", 00:20:43.138 "aliases": [ 00:20:43.138 "5aa6e687-2f7f-4b87-bfa7-4d9bb1249a86" 00:20:43.138 ], 00:20:43.138 "product_name": "Raid Volume", 00:20:43.138 "block_size": 512, 00:20:43.138 "num_blocks": 126976, 00:20:43.138 "uuid": "5aa6e687-2f7f-4b87-bfa7-4d9bb1249a86", 00:20:43.138 "assigned_rate_limits": { 00:20:43.138 "rw_ios_per_sec": 0, 00:20:43.138 "rw_mbytes_per_sec": 0, 00:20:43.138 "r_mbytes_per_sec": 0, 00:20:43.138 "w_mbytes_per_sec": 0 00:20:43.138 }, 00:20:43.138 "claimed": false, 00:20:43.138 "zoned": false, 00:20:43.138 "supported_io_types": { 00:20:43.138 "read": true, 00:20:43.138 "write": true, 00:20:43.138 "unmap": false, 00:20:43.138 "flush": false, 00:20:43.138 "reset": true, 00:20:43.138 "nvme_admin": false, 00:20:43.138 "nvme_io": false, 00:20:43.138 "nvme_io_md": false, 00:20:43.138 "write_zeroes": true, 00:20:43.138 "zcopy": false, 00:20:43.138 "get_zone_info": false, 00:20:43.138 "zone_management": false, 00:20:43.138 "zone_append": false, 00:20:43.138 "compare": false, 00:20:43.138 "compare_and_write": false, 00:20:43.138 "abort": false, 00:20:43.138 "seek_hole": false, 00:20:43.138 "seek_data": false, 00:20:43.138 "copy": false, 00:20:43.138 "nvme_iov_md": false 00:20:43.138 }, 00:20:43.138 "driver_specific": { 00:20:43.138 "raid": { 00:20:43.138 "uuid": "5aa6e687-2f7f-4b87-bfa7-4d9bb1249a86", 00:20:43.138 "strip_size_kb": 64, 00:20:43.138 "state": "online", 00:20:43.138 "raid_level": "raid5f", 00:20:43.138 "superblock": true, 00:20:43.138 "num_base_bdevs": 3, 00:20:43.138 "num_base_bdevs_discovered": 3, 00:20:43.138 "num_base_bdevs_operational": 3, 00:20:43.138 "base_bdevs_list": [ 00:20:43.138 { 00:20:43.138 "name": "BaseBdev1", 00:20:43.138 "uuid": "af2b966d-5fa7-46c8-adbc-325134a371b9", 00:20:43.138 "is_configured": true, 00:20:43.138 "data_offset": 2048, 00:20:43.138 "data_size": 63488 00:20:43.138 }, 00:20:43.138 { 00:20:43.138 "name": "BaseBdev2", 00:20:43.138 "uuid": "1a4cdfbf-d266-433a-8ce6-94a77fb5820b", 00:20:43.138 "is_configured": true, 00:20:43.138 "data_offset": 2048, 00:20:43.138 "data_size": 63488 00:20:43.138 }, 00:20:43.138 { 00:20:43.138 "name": "BaseBdev3", 00:20:43.138 "uuid": "5b051e0e-3646-4c87-ad33-7b468ca0356a", 00:20:43.138 "is_configured": true, 00:20:43.138 "data_offset": 2048, 00:20:43.138 "data_size": 63488 00:20:43.138 } 00:20:43.138 ] 00:20:43.138 } 00:20:43.138 } 00:20:43.138 }' 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:43.138 BaseBdev2 00:20:43.138 BaseBdev3' 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:43.138 06:15:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:43.398 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:43.398 "name": "BaseBdev1", 00:20:43.398 "aliases": [ 00:20:43.398 "af2b966d-5fa7-46c8-adbc-325134a371b9" 00:20:43.398 ], 00:20:43.398 "product_name": "Malloc disk", 00:20:43.398 "block_size": 512, 00:20:43.398 "num_blocks": 65536, 00:20:43.398 "uuid": "af2b966d-5fa7-46c8-adbc-325134a371b9", 00:20:43.398 "assigned_rate_limits": { 00:20:43.398 "rw_ios_per_sec": 0, 00:20:43.398 "rw_mbytes_per_sec": 0, 00:20:43.398 "r_mbytes_per_sec": 0, 00:20:43.398 "w_mbytes_per_sec": 0 00:20:43.398 }, 00:20:43.398 "claimed": true, 00:20:43.398 "claim_type": "exclusive_write", 00:20:43.398 "zoned": false, 00:20:43.398 "supported_io_types": { 00:20:43.398 "read": true, 00:20:43.398 "write": true, 00:20:43.398 "unmap": true, 00:20:43.398 "flush": true, 00:20:43.398 "reset": true, 00:20:43.398 "nvme_admin": false, 00:20:43.398 "nvme_io": false, 00:20:43.398 "nvme_io_md": false, 00:20:43.398 "write_zeroes": true, 00:20:43.398 "zcopy": true, 00:20:43.398 "get_zone_info": false, 00:20:43.398 "zone_management": false, 00:20:43.398 "zone_append": false, 00:20:43.398 "compare": false, 00:20:43.398 "compare_and_write": false, 00:20:43.398 "abort": true, 00:20:43.398 "seek_hole": false, 00:20:43.398 "seek_data": false, 00:20:43.398 "copy": true, 00:20:43.398 "nvme_iov_md": false 00:20:43.398 }, 00:20:43.398 "memory_domains": [ 00:20:43.398 { 00:20:43.398 "dma_device_id": "system", 00:20:43.398 "dma_device_type": 1 00:20:43.398 }, 00:20:43.398 { 00:20:43.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.398 "dma_device_type": 2 00:20:43.398 } 00:20:43.398 ], 00:20:43.398 "driver_specific": {} 00:20:43.398 }' 00:20:43.398 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.398 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.657 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.916 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.916 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.916 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:43.916 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:43.916 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:44.175 "name": "BaseBdev2", 00:20:44.175 "aliases": [ 00:20:44.175 "1a4cdfbf-d266-433a-8ce6-94a77fb5820b" 00:20:44.175 ], 00:20:44.175 "product_name": "Malloc disk", 00:20:44.175 "block_size": 512, 00:20:44.175 "num_blocks": 65536, 00:20:44.175 "uuid": "1a4cdfbf-d266-433a-8ce6-94a77fb5820b", 00:20:44.175 "assigned_rate_limits": { 00:20:44.175 "rw_ios_per_sec": 0, 00:20:44.175 "rw_mbytes_per_sec": 0, 00:20:44.175 "r_mbytes_per_sec": 0, 00:20:44.175 "w_mbytes_per_sec": 0 00:20:44.175 }, 00:20:44.175 "claimed": true, 00:20:44.175 "claim_type": "exclusive_write", 00:20:44.175 "zoned": false, 00:20:44.175 "supported_io_types": { 00:20:44.175 "read": true, 00:20:44.175 "write": true, 00:20:44.175 "unmap": true, 00:20:44.175 "flush": true, 00:20:44.175 "reset": true, 00:20:44.175 "nvme_admin": false, 00:20:44.175 "nvme_io": false, 00:20:44.175 "nvme_io_md": false, 00:20:44.175 "write_zeroes": true, 00:20:44.175 "zcopy": true, 00:20:44.175 "get_zone_info": false, 00:20:44.175 "zone_management": false, 00:20:44.175 "zone_append": false, 00:20:44.175 "compare": false, 00:20:44.175 "compare_and_write": false, 00:20:44.175 "abort": true, 00:20:44.175 "seek_hole": false, 00:20:44.175 "seek_data": false, 00:20:44.175 "copy": true, 00:20:44.175 "nvme_iov_md": false 00:20:44.175 }, 00:20:44.175 "memory_domains": [ 00:20:44.175 { 00:20:44.175 "dma_device_id": "system", 00:20:44.175 "dma_device_type": 1 00:20:44.175 }, 00:20:44.175 { 00:20:44.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.175 "dma_device_type": 2 00:20:44.175 } 00:20:44.175 ], 00:20:44.175 "driver_specific": {} 00:20:44.175 }' 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.175 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.434 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:44.434 06:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.434 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.434 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:44.434 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:44.434 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:44.434 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:44.694 "name": "BaseBdev3", 00:20:44.694 "aliases": [ 00:20:44.694 
"5b051e0e-3646-4c87-ad33-7b468ca0356a" 00:20:44.694 ], 00:20:44.694 "product_name": "Malloc disk", 00:20:44.694 "block_size": 512, 00:20:44.694 "num_blocks": 65536, 00:20:44.694 "uuid": "5b051e0e-3646-4c87-ad33-7b468ca0356a", 00:20:44.694 "assigned_rate_limits": { 00:20:44.694 "rw_ios_per_sec": 0, 00:20:44.694 "rw_mbytes_per_sec": 0, 00:20:44.694 "r_mbytes_per_sec": 0, 00:20:44.694 "w_mbytes_per_sec": 0 00:20:44.694 }, 00:20:44.694 "claimed": true, 00:20:44.694 "claim_type": "exclusive_write", 00:20:44.694 "zoned": false, 00:20:44.694 "supported_io_types": { 00:20:44.694 "read": true, 00:20:44.694 "write": true, 00:20:44.694 "unmap": true, 00:20:44.694 "flush": true, 00:20:44.694 "reset": true, 00:20:44.694 "nvme_admin": false, 00:20:44.694 "nvme_io": false, 00:20:44.694 "nvme_io_md": false, 00:20:44.694 "write_zeroes": true, 00:20:44.694 "zcopy": true, 00:20:44.694 "get_zone_info": false, 00:20:44.694 "zone_management": false, 00:20:44.694 "zone_append": false, 00:20:44.694 "compare": false, 00:20:44.694 "compare_and_write": false, 00:20:44.694 "abort": true, 00:20:44.694 "seek_hole": false, 00:20:44.694 "seek_data": false, 00:20:44.694 "copy": true, 00:20:44.694 "nvme_iov_md": false 00:20:44.694 }, 00:20:44.694 "memory_domains": [ 00:20:44.694 { 00:20:44.694 "dma_device_id": "system", 00:20:44.694 "dma_device_type": 1 00:20:44.694 }, 00:20:44.694 { 00:20:44.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.694 "dma_device_type": 2 00:20:44.694 } 00:20:44.694 ], 00:20:44.694 "driver_specific": {} 00:20:44.694 }' 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.694 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.953 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:44.953 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.953 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.953 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:44.953 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:45.213 [2024-08-13 06:15:46.791221] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 
-- # return 0 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.213 06:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.472 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.472 "name": "Existed_Raid", 00:20:45.472 "uuid": "5aa6e687-2f7f-4b87-bfa7-4d9bb1249a86", 00:20:45.472 "strip_size_kb": 64, 00:20:45.472 "state": "online", 00:20:45.472 "raid_level": "raid5f", 00:20:45.472 "superblock": true, 00:20:45.472 "num_base_bdevs": 3, 00:20:45.472 "num_base_bdevs_discovered": 2, 00:20:45.472 "num_base_bdevs_operational": 2, 00:20:45.472 "base_bdevs_list": [ 00:20:45.472 { 00:20:45.472 "name": null, 00:20:45.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.472 "is_configured": false, 00:20:45.472 "data_offset": 2048, 00:20:45.472 "data_size": 63488 00:20:45.472 }, 00:20:45.472 { 00:20:45.473 "name": "BaseBdev2", 00:20:45.473 "uuid": "1a4cdfbf-d266-433a-8ce6-94a77fb5820b", 00:20:45.473 "is_configured": true, 00:20:45.473 "data_offset": 2048, 00:20:45.473 "data_size": 63488 00:20:45.473 }, 00:20:45.473 { 00:20:45.473 "name": "BaseBdev3", 00:20:45.473 "uuid": "5b051e0e-3646-4c87-ad33-7b468ca0356a", 00:20:45.473 "is_configured": true, 00:20:45.473 "data_offset": 2048, 00:20:45.473 "data_size": 63488 00:20:45.473 } 00:20:45.473 ] 00:20:45.473 }' 00:20:45.473 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.473 06:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.041 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:46.041 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:46.041 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:46.041 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.301 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:46.301 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.301 06:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:46.301 [2024-08-13 06:15:47.992189] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.301 [2024-08-13 06:15:47.992322] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.301 [2024-08-13 06:15:48.002787] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.301 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:46.301 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:46.301 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.301 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:46.560 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:46.560 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.560 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:46.820 [2024-08-13 06:15:48.406183] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:46.820 [2024-08-13 06:15:48.406276] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:20:46.820 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:46.820 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:46.820 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.820 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:47.080 BaseBdev2 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:47.080 06:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.339 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:47.599 [ 00:20:47.599 { 00:20:47.599 "name": "BaseBdev2", 00:20:47.599 "aliases": [ 00:20:47.599 "fcff100c-1a83-4d25-9641-e563860b4233" 00:20:47.599 ], 00:20:47.599 "product_name": "Malloc disk", 00:20:47.599 "block_size": 512, 00:20:47.599 "num_blocks": 65536, 00:20:47.599 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:47.599 "assigned_rate_limits": { 00:20:47.599 "rw_ios_per_sec": 0, 00:20:47.599 "rw_mbytes_per_sec": 0, 00:20:47.599 "r_mbytes_per_sec": 0, 00:20:47.599 "w_mbytes_per_sec": 0 00:20:47.599 }, 00:20:47.599 "claimed": false, 00:20:47.599 "zoned": false, 00:20:47.599 "supported_io_types": { 00:20:47.599 "read": true, 00:20:47.599 "write": true, 00:20:47.599 "unmap": true, 00:20:47.599 "flush": true, 00:20:47.599 "reset": true, 00:20:47.599 "nvme_admin": false, 00:20:47.599 "nvme_io": false, 00:20:47.599 "nvme_io_md": false, 00:20:47.599 "write_zeroes": true, 00:20:47.599 "zcopy": true, 00:20:47.599 "get_zone_info": false, 00:20:47.599 "zone_management": false, 00:20:47.599 "zone_append": false, 00:20:47.599 "compare": false, 00:20:47.599 "compare_and_write": false, 00:20:47.599 "abort": true, 00:20:47.599 "seek_hole": false, 00:20:47.599 "seek_data": false, 00:20:47.599 "copy": true, 00:20:47.599 "nvme_iov_md": false 00:20:47.599 }, 00:20:47.599 "memory_domains": [ 00:20:47.599 { 00:20:47.599 "dma_device_id": "system", 00:20:47.599 "dma_device_type": 1 00:20:47.599 }, 00:20:47.599 { 00:20:47.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.599 "dma_device_type": 2 00:20:47.599 } 00:20:47.599 ], 00:20:47.599 "driver_specific": {} 00:20:47.599 } 00:20:47.599 ] 00:20:47.599 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:47.599 06:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:47.599 06:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:47.599 06:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:47.859 BaseBdev3 00:20:47.859 06:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:47.859 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:47.859 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:47.859 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
i 00:20:47.859 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:47.859 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:47.859 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.119 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:48.119 [ 00:20:48.119 { 00:20:48.119 "name": "BaseBdev3", 00:20:48.119 "aliases": [ 00:20:48.119 "7940b15a-846e-4dfe-b780-ffd27169e5d0" 00:20:48.119 ], 00:20:48.119 "product_name": "Malloc disk", 00:20:48.119 "block_size": 512, 00:20:48.119 "num_blocks": 65536, 00:20:48.119 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:48.119 "assigned_rate_limits": { 00:20:48.119 "rw_ios_per_sec": 0, 00:20:48.119 "rw_mbytes_per_sec": 0, 00:20:48.119 "r_mbytes_per_sec": 0, 00:20:48.119 "w_mbytes_per_sec": 0 00:20:48.119 }, 00:20:48.119 "claimed": false, 00:20:48.119 "zoned": false, 00:20:48.120 "supported_io_types": { 00:20:48.120 "read": true, 00:20:48.120 "write": true, 00:20:48.120 "unmap": true, 00:20:48.120 "flush": true, 00:20:48.120 "reset": true, 00:20:48.120 "nvme_admin": false, 00:20:48.120 "nvme_io": false, 00:20:48.120 "nvme_io_md": false, 00:20:48.120 "write_zeroes": true, 00:20:48.120 "zcopy": true, 00:20:48.120 "get_zone_info": false, 00:20:48.120 "zone_management": false, 00:20:48.120 "zone_append": false, 00:20:48.120 "compare": false, 00:20:48.120 "compare_and_write": false, 00:20:48.120 "abort": true, 00:20:48.120 "seek_hole": false, 00:20:48.120 "seek_data": false, 00:20:48.120 "copy": true, 00:20:48.120 "nvme_iov_md": false 00:20:48.120 }, 00:20:48.120 "memory_domains": [ 00:20:48.120 { 00:20:48.120 "dma_device_id": "system", 00:20:48.120 "dma_device_type": 1 00:20:48.120 }, 00:20:48.120 { 00:20:48.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.120 "dma_device_type": 2 00:20:48.120 } 00:20:48.120 ], 00:20:48.120 "driver_specific": {} 00:20:48.120 } 00:20:48.120 ] 00:20:48.120 06:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:48.120 06:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:48.120 06:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:48.120 06:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:48.380 [2024-08-13 06:15:50.062023] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:48.380 [2024-08-13 06:15:50.062112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:48.380 [2024-08-13 06:15:50.062160] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:48.380 [2024-08-13 06:15:50.064304] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.380 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.640 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.640 "name": "Existed_Raid", 00:20:48.640 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:48.640 "strip_size_kb": 64, 00:20:48.640 "state": "configuring", 00:20:48.640 "raid_level": "raid5f", 00:20:48.640 "superblock": true, 00:20:48.640 "num_base_bdevs": 3, 00:20:48.640 "num_base_bdevs_discovered": 2, 00:20:48.640 "num_base_bdevs_operational": 3, 00:20:48.640 "base_bdevs_list": [ 00:20:48.640 { 00:20:48.640 "name": "BaseBdev1", 00:20:48.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.640 "is_configured": false, 00:20:48.640 "data_offset": 0, 00:20:48.640 "data_size": 0 00:20:48.640 }, 00:20:48.640 { 00:20:48.640 "name": "BaseBdev2", 00:20:48.640 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:48.640 "is_configured": true, 00:20:48.640 "data_offset": 2048, 00:20:48.640 "data_size": 63488 00:20:48.640 }, 00:20:48.640 { 00:20:48.640 "name": "BaseBdev3", 00:20:48.640 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:48.640 "is_configured": true, 00:20:48.640 "data_offset": 2048, 00:20:48.640 "data_size": 63488 00:20:48.640 } 00:20:48.640 ] 00:20:48.640 }' 00:20:48.640 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.640 06:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.210 06:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:49.469 [2024-08-13 06:15:51.006830] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.469 "name": "Existed_Raid", 00:20:49.469 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:49.469 "strip_size_kb": 64, 00:20:49.469 "state": "configuring", 00:20:49.469 "raid_level": "raid5f", 00:20:49.469 "superblock": true, 00:20:49.469 "num_base_bdevs": 3, 00:20:49.469 "num_base_bdevs_discovered": 1, 00:20:49.469 "num_base_bdevs_operational": 3, 00:20:49.469 "base_bdevs_list": [ 00:20:49.469 { 00:20:49.469 "name": "BaseBdev1", 00:20:49.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.469 "is_configured": false, 00:20:49.469 "data_offset": 0, 00:20:49.469 "data_size": 0 00:20:49.469 }, 00:20:49.469 { 00:20:49.469 "name": null, 00:20:49.469 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:49.469 "is_configured": false, 00:20:49.469 "data_offset": 2048, 00:20:49.469 "data_size": 63488 00:20:49.469 }, 00:20:49.469 { 00:20:49.469 "name": "BaseBdev3", 00:20:49.469 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:49.469 "is_configured": true, 00:20:49.469 "data_offset": 2048, 00:20:49.469 "data_size": 63488 00:20:49.469 } 00:20:49.469 ] 00:20:49.469 }' 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.469 06:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:50.039 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.298 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:50.298 06:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:50.558 [2024-08-13 06:15:52.134342] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.558 BaseBdev1 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:50.558 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:50.818 [ 00:20:50.818 { 00:20:50.818 "name": "BaseBdev1", 00:20:50.818 "aliases": [ 00:20:50.818 "2949c8cb-48e5-4e5d-9f9b-47bab12b697a" 00:20:50.818 ], 00:20:50.818 "product_name": "Malloc disk", 00:20:50.818 "block_size": 512, 00:20:50.818 "num_blocks": 65536, 00:20:50.818 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:50.818 "assigned_rate_limits": { 00:20:50.818 "rw_ios_per_sec": 0, 00:20:50.818 "rw_mbytes_per_sec": 0, 00:20:50.818 "r_mbytes_per_sec": 0, 00:20:50.818 "w_mbytes_per_sec": 0 00:20:50.818 }, 00:20:50.818 "claimed": true, 00:20:50.818 "claim_type": "exclusive_write", 00:20:50.818 "zoned": false, 00:20:50.818 "supported_io_types": { 00:20:50.818 "read": true, 00:20:50.818 "write": true, 00:20:50.818 "unmap": true, 00:20:50.818 "flush": true, 00:20:50.818 "reset": true, 00:20:50.818 "nvme_admin": false, 00:20:50.818 "nvme_io": false, 00:20:50.818 "nvme_io_md": false, 00:20:50.818 "write_zeroes": true, 00:20:50.818 "zcopy": true, 00:20:50.818 "get_zone_info": false, 00:20:50.818 "zone_management": false, 00:20:50.818 "zone_append": false, 00:20:50.818 "compare": false, 00:20:50.818 "compare_and_write": false, 00:20:50.818 "abort": true, 00:20:50.818 "seek_hole": false, 00:20:50.818 "seek_data": false, 00:20:50.818 "copy": true, 00:20:50.818 "nvme_iov_md": false 00:20:50.818 }, 00:20:50.818 "memory_domains": [ 00:20:50.818 { 00:20:50.818 "dma_device_id": "system", 00:20:50.818 "dma_device_type": 1 00:20:50.818 }, 00:20:50.818 { 00:20:50.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.818 "dma_device_type": 2 00:20:50.818 } 00:20:50.818 ], 00:20:50.818 "driver_specific": {} 00:20:50.818 } 00:20:50.818 ] 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.818 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.078 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.078 "name": "Existed_Raid", 00:20:51.078 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:51.078 "strip_size_kb": 64, 00:20:51.078 "state": "configuring", 00:20:51.078 "raid_level": "raid5f", 00:20:51.078 "superblock": true, 00:20:51.078 "num_base_bdevs": 3, 00:20:51.078 "num_base_bdevs_discovered": 2, 00:20:51.078 "num_base_bdevs_operational": 3, 00:20:51.078 "base_bdevs_list": [ 00:20:51.078 { 00:20:51.078 "name": "BaseBdev1", 00:20:51.078 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:51.078 "is_configured": true, 00:20:51.078 "data_offset": 2048, 00:20:51.078 "data_size": 63488 00:20:51.078 }, 00:20:51.078 { 00:20:51.078 "name": null, 00:20:51.078 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:51.078 "is_configured": false, 00:20:51.078 "data_offset": 2048, 00:20:51.078 "data_size": 63488 00:20:51.078 }, 00:20:51.078 { 00:20:51.078 "name": "BaseBdev3", 00:20:51.078 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:51.078 "is_configured": true, 00:20:51.078 "data_offset": 2048, 00:20:51.078 "data_size": 63488 00:20:51.078 } 00:20:51.078 ] 00:20:51.078 }' 00:20:51.078 06:15:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.078 06:15:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.646 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.646 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:51.905 [2024-08-13 06:15:53.640270] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.905 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.164 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.164 "name": "Existed_Raid", 00:20:52.164 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:52.164 "strip_size_kb": 64, 00:20:52.164 "state": "configuring", 00:20:52.164 "raid_level": "raid5f", 00:20:52.164 "superblock": true, 00:20:52.164 "num_base_bdevs": 3, 00:20:52.164 "num_base_bdevs_discovered": 1, 00:20:52.164 "num_base_bdevs_operational": 3, 00:20:52.164 "base_bdevs_list": [ 00:20:52.164 { 00:20:52.164 "name": "BaseBdev1", 00:20:52.164 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:52.164 "is_configured": true, 00:20:52.164 "data_offset": 2048, 00:20:52.164 "data_size": 63488 00:20:52.164 }, 00:20:52.164 { 00:20:52.164 "name": null, 00:20:52.164 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:52.164 "is_configured": false, 00:20:52.164 "data_offset": 2048, 00:20:52.164 "data_size": 63488 00:20:52.164 }, 00:20:52.164 { 00:20:52.164 "name": null, 00:20:52.164 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:52.164 "is_configured": false, 00:20:52.164 "data_offset": 2048, 00:20:52.164 "data_size": 63488 00:20:52.164 } 00:20:52.164 ] 00:20:52.164 }' 00:20:52.164 06:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.165 06:15:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.733 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:52.733 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:52.993 [2024-08-13 06:15:54.718608] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:52.993 
06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.993 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.253 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.253 "name": "Existed_Raid", 00:20:53.253 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:53.253 "strip_size_kb": 64, 00:20:53.253 "state": "configuring", 00:20:53.253 "raid_level": "raid5f", 00:20:53.253 "superblock": true, 00:20:53.253 "num_base_bdevs": 3, 00:20:53.253 "num_base_bdevs_discovered": 2, 00:20:53.253 "num_base_bdevs_operational": 3, 00:20:53.253 "base_bdevs_list": [ 00:20:53.253 { 00:20:53.253 "name": "BaseBdev1", 00:20:53.253 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:53.253 "is_configured": true, 00:20:53.253 "data_offset": 2048, 00:20:53.253 "data_size": 63488 00:20:53.253 }, 00:20:53.253 { 00:20:53.253 "name": null, 00:20:53.253 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:53.253 "is_configured": false, 00:20:53.253 "data_offset": 2048, 00:20:53.253 "data_size": 63488 00:20:53.253 }, 00:20:53.253 { 00:20:53.253 "name": "BaseBdev3", 00:20:53.253 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:53.253 "is_configured": true, 00:20:53.253 "data_offset": 2048, 00:20:53.253 "data_size": 63488 00:20:53.253 } 00:20:53.253 ] 00:20:53.253 }' 00:20:53.253 06:15:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.253 06:15:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.823 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:53.823 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.083 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:54.083 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:54.343 [2024-08-13 06:15:55.913059] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.343 06:15:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.603 06:15:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.603 "name": "Existed_Raid", 00:20:54.603 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:54.603 "strip_size_kb": 64, 00:20:54.603 "state": "configuring", 00:20:54.603 "raid_level": "raid5f", 00:20:54.603 "superblock": true, 00:20:54.603 "num_base_bdevs": 3, 00:20:54.603 "num_base_bdevs_discovered": 1, 00:20:54.603 "num_base_bdevs_operational": 3, 00:20:54.603 "base_bdevs_list": [ 00:20:54.603 { 00:20:54.603 "name": null, 00:20:54.603 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:54.603 "is_configured": false, 00:20:54.603 "data_offset": 2048, 00:20:54.603 "data_size": 63488 00:20:54.603 }, 00:20:54.603 { 00:20:54.603 "name": null, 00:20:54.603 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:54.603 "is_configured": false, 00:20:54.603 "data_offset": 2048, 00:20:54.603 "data_size": 63488 00:20:54.603 }, 00:20:54.603 { 00:20:54.603 "name": "BaseBdev3", 00:20:54.603 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:54.603 "is_configured": true, 00:20:54.603 "data_offset": 2048, 00:20:54.603 "data_size": 63488 00:20:54.603 } 00:20:54.603 ] 00:20:54.603 }' 00:20:54.603 06:15:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.603 06:15:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.172 06:15:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.172 06:15:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:55.172 06:15:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:55.172 06:15:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:55.432 [2024-08-13 06:15:57.130711] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.432 06:15:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.432 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.691 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.691 "name": "Existed_Raid", 00:20:55.691 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:55.691 "strip_size_kb": 64, 00:20:55.691 "state": "configuring", 00:20:55.691 "raid_level": "raid5f", 00:20:55.691 "superblock": true, 00:20:55.691 "num_base_bdevs": 3, 00:20:55.691 "num_base_bdevs_discovered": 2, 00:20:55.691 "num_base_bdevs_operational": 3, 00:20:55.691 "base_bdevs_list": [ 00:20:55.691 { 00:20:55.691 "name": null, 00:20:55.691 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:55.691 "is_configured": false, 00:20:55.691 "data_offset": 2048, 00:20:55.691 "data_size": 63488 00:20:55.691 }, 00:20:55.691 { 00:20:55.691 "name": "BaseBdev2", 00:20:55.691 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:55.691 "is_configured": true, 00:20:55.691 "data_offset": 2048, 00:20:55.691 "data_size": 63488 00:20:55.691 }, 00:20:55.691 { 00:20:55.691 "name": "BaseBdev3", 00:20:55.691 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:55.691 "is_configured": true, 00:20:55.691 "data_offset": 2048, 00:20:55.691 "data_size": 63488 00:20:55.691 } 00:20:55.691 ] 00:20:55.691 }' 00:20:55.691 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.691 06:15:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.261 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.261 06:15:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:56.532 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:56.532 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:56.532 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2949c8cb-48e5-4e5d-9f9b-47bab12b697a 00:20:56.816 [2024-08-13 06:15:58.517117] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:56.816 [2024-08-13 06:15:58.517453] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:20:56.816 [2024-08-13 06:15:58.517471] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:56.816 [2024-08-13 06:15:58.517744] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:20:56.816 [2024-08-13 06:15:58.518201] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:20:56.816 [2024-08-13 06:15:58.518224] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:20:56.816 [2024-08-13 06:15:58.518339] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.816 NewBaseBdev 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:56.816 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.098 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:57.368 [ 00:20:57.368 { 00:20:57.368 "name": "NewBaseBdev", 00:20:57.368 "aliases": [ 00:20:57.368 "2949c8cb-48e5-4e5d-9f9b-47bab12b697a" 00:20:57.368 ], 00:20:57.368 "product_name": "Malloc disk", 00:20:57.368 "block_size": 512, 00:20:57.368 "num_blocks": 65536, 00:20:57.368 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:57.368 "assigned_rate_limits": { 00:20:57.368 "rw_ios_per_sec": 0, 00:20:57.368 "rw_mbytes_per_sec": 0, 00:20:57.368 "r_mbytes_per_sec": 0, 00:20:57.368 "w_mbytes_per_sec": 0 00:20:57.368 }, 00:20:57.368 "claimed": true, 00:20:57.368 "claim_type": "exclusive_write", 00:20:57.368 "zoned": false, 00:20:57.368 "supported_io_types": { 00:20:57.368 "read": true, 00:20:57.368 "write": true, 00:20:57.368 "unmap": true, 00:20:57.368 "flush": true, 00:20:57.368 "reset": true, 00:20:57.368 "nvme_admin": false, 00:20:57.368 "nvme_io": false, 00:20:57.368 "nvme_io_md": false, 00:20:57.368 "write_zeroes": true, 00:20:57.368 "zcopy": true, 00:20:57.368 "get_zone_info": false, 00:20:57.368 "zone_management": false, 00:20:57.368 "zone_append": false, 00:20:57.368 "compare": false, 00:20:57.368 "compare_and_write": false, 00:20:57.368 "abort": true, 00:20:57.368 "seek_hole": false, 00:20:57.368 "seek_data": false, 00:20:57.368 "copy": true, 00:20:57.368 "nvme_iov_md": false 00:20:57.368 }, 00:20:57.368 "memory_domains": [ 00:20:57.368 { 00:20:57.368 "dma_device_id": 
"system", 00:20:57.368 "dma_device_type": 1 00:20:57.368 }, 00:20:57.368 { 00:20:57.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.368 "dma_device_type": 2 00:20:57.368 } 00:20:57.368 ], 00:20:57.368 "driver_specific": {} 00:20:57.368 } 00:20:57.368 ] 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.368 06:15:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.368 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.368 "name": "Existed_Raid", 00:20:57.368 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:57.368 "strip_size_kb": 64, 00:20:57.368 "state": "online", 00:20:57.368 "raid_level": "raid5f", 00:20:57.368 "superblock": true, 00:20:57.368 "num_base_bdevs": 3, 00:20:57.368 "num_base_bdevs_discovered": 3, 00:20:57.368 "num_base_bdevs_operational": 3, 00:20:57.368 "base_bdevs_list": [ 00:20:57.368 { 00:20:57.368 "name": "NewBaseBdev", 00:20:57.368 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:57.368 "is_configured": true, 00:20:57.368 "data_offset": 2048, 00:20:57.368 "data_size": 63488 00:20:57.368 }, 00:20:57.368 { 00:20:57.368 "name": "BaseBdev2", 00:20:57.368 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:57.368 "is_configured": true, 00:20:57.368 "data_offset": 2048, 00:20:57.368 "data_size": 63488 00:20:57.368 }, 00:20:57.368 { 00:20:57.368 "name": "BaseBdev3", 00:20:57.368 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:57.368 "is_configured": true, 00:20:57.368 "data_offset": 2048, 00:20:57.368 "data_size": 63488 00:20:57.368 } 00:20:57.368 ] 00:20:57.368 }' 00:20:57.368 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.368 06:15:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=Existed_Raid 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:58.307 [2024-08-13 06:15:59.935037] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:58.307 "name": "Existed_Raid", 00:20:58.307 "aliases": [ 00:20:58.307 "25039e12-9bc0-460b-a97c-dc48cbcf875e" 00:20:58.307 ], 00:20:58.307 "product_name": "Raid Volume", 00:20:58.307 "block_size": 512, 00:20:58.307 "num_blocks": 126976, 00:20:58.307 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:58.307 "assigned_rate_limits": { 00:20:58.307 "rw_ios_per_sec": 0, 00:20:58.307 "rw_mbytes_per_sec": 0, 00:20:58.307 "r_mbytes_per_sec": 0, 00:20:58.307 "w_mbytes_per_sec": 0 00:20:58.307 }, 00:20:58.307 "claimed": false, 00:20:58.307 "zoned": false, 00:20:58.307 "supported_io_types": { 00:20:58.307 "read": true, 00:20:58.307 "write": true, 00:20:58.307 "unmap": false, 00:20:58.307 "flush": false, 00:20:58.307 "reset": true, 00:20:58.307 "nvme_admin": false, 00:20:58.307 "nvme_io": false, 00:20:58.307 "nvme_io_md": false, 00:20:58.307 "write_zeroes": true, 00:20:58.307 "zcopy": false, 00:20:58.307 "get_zone_info": false, 00:20:58.307 "zone_management": false, 00:20:58.307 "zone_append": false, 00:20:58.307 "compare": false, 00:20:58.307 "compare_and_write": false, 00:20:58.307 "abort": false, 00:20:58.307 "seek_hole": false, 00:20:58.307 "seek_data": false, 00:20:58.307 "copy": false, 00:20:58.307 "nvme_iov_md": false 00:20:58.307 }, 00:20:58.307 "driver_specific": { 00:20:58.307 "raid": { 00:20:58.307 "uuid": "25039e12-9bc0-460b-a97c-dc48cbcf875e", 00:20:58.307 "strip_size_kb": 64, 00:20:58.307 "state": "online", 00:20:58.307 "raid_level": "raid5f", 00:20:58.307 "superblock": true, 00:20:58.307 "num_base_bdevs": 3, 00:20:58.307 "num_base_bdevs_discovered": 3, 00:20:58.307 "num_base_bdevs_operational": 3, 00:20:58.307 "base_bdevs_list": [ 00:20:58.307 { 00:20:58.307 "name": "NewBaseBdev", 00:20:58.307 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:58.307 "is_configured": true, 00:20:58.307 "data_offset": 2048, 00:20:58.307 "data_size": 63488 00:20:58.307 }, 00:20:58.307 { 00:20:58.307 "name": "BaseBdev2", 00:20:58.307 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:58.307 "is_configured": true, 00:20:58.307 "data_offset": 2048, 00:20:58.307 "data_size": 63488 00:20:58.307 }, 00:20:58.307 { 00:20:58.307 "name": "BaseBdev3", 00:20:58.307 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:58.307 "is_configured": true, 00:20:58.307 "data_offset": 2048, 00:20:58.307 "data_size": 63488 00:20:58.307 } 00:20:58.307 ] 00:20:58.307 } 00:20:58.307 } 00:20:58.307 }' 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:58.307 BaseBdev2 00:20:58.307 BaseBdev3' 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.307 06:15:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:58.567 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.567 "name": "NewBaseBdev", 00:20:58.567 "aliases": [ 00:20:58.567 "2949c8cb-48e5-4e5d-9f9b-47bab12b697a" 00:20:58.567 ], 00:20:58.567 "product_name": "Malloc disk", 00:20:58.567 "block_size": 512, 00:20:58.567 "num_blocks": 65536, 00:20:58.567 "uuid": "2949c8cb-48e5-4e5d-9f9b-47bab12b697a", 00:20:58.567 "assigned_rate_limits": { 00:20:58.567 "rw_ios_per_sec": 0, 00:20:58.567 "rw_mbytes_per_sec": 0, 00:20:58.567 "r_mbytes_per_sec": 0, 00:20:58.567 "w_mbytes_per_sec": 0 00:20:58.567 }, 00:20:58.567 "claimed": true, 00:20:58.567 "claim_type": "exclusive_write", 00:20:58.567 "zoned": false, 00:20:58.567 "supported_io_types": { 00:20:58.567 "read": true, 00:20:58.567 "write": true, 00:20:58.567 "unmap": true, 00:20:58.567 "flush": true, 00:20:58.567 "reset": true, 00:20:58.567 "nvme_admin": false, 00:20:58.567 "nvme_io": false, 00:20:58.567 "nvme_io_md": false, 00:20:58.567 "write_zeroes": true, 00:20:58.567 "zcopy": true, 00:20:58.567 "get_zone_info": false, 00:20:58.567 "zone_management": false, 00:20:58.567 "zone_append": false, 00:20:58.567 "compare": false, 00:20:58.567 "compare_and_write": false, 00:20:58.567 "abort": true, 00:20:58.567 "seek_hole": false, 00:20:58.567 "seek_data": false, 00:20:58.567 "copy": true, 00:20:58.567 "nvme_iov_md": false 00:20:58.567 }, 00:20:58.567 "memory_domains": [ 00:20:58.567 { 00:20:58.567 "dma_device_id": "system", 00:20:58.567 "dma_device_type": 1 00:20:58.567 }, 00:20:58.567 { 00:20:58.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.567 "dma_device_type": 2 00:20:58.567 } 00:20:58.567 ], 00:20:58.567 "driver_specific": {} 00:20:58.567 }' 00:20:58.567 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.567 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.567 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.567 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.567 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:58.827 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:59.087 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:59.087 "name": "BaseBdev2", 00:20:59.087 "aliases": [ 00:20:59.087 "fcff100c-1a83-4d25-9641-e563860b4233" 00:20:59.087 ], 00:20:59.087 "product_name": "Malloc disk", 00:20:59.087 "block_size": 512, 00:20:59.087 "num_blocks": 65536, 00:20:59.087 "uuid": "fcff100c-1a83-4d25-9641-e563860b4233", 00:20:59.087 "assigned_rate_limits": { 00:20:59.087 "rw_ios_per_sec": 0, 00:20:59.087 "rw_mbytes_per_sec": 0, 00:20:59.087 "r_mbytes_per_sec": 0, 00:20:59.087 "w_mbytes_per_sec": 0 00:20:59.087 }, 00:20:59.087 "claimed": true, 00:20:59.087 "claim_type": "exclusive_write", 00:20:59.087 "zoned": false, 00:20:59.087 "supported_io_types": { 00:20:59.087 "read": true, 00:20:59.087 "write": true, 00:20:59.087 "unmap": true, 00:20:59.087 "flush": true, 00:20:59.087 "reset": true, 00:20:59.087 "nvme_admin": false, 00:20:59.087 "nvme_io": false, 00:20:59.087 "nvme_io_md": false, 00:20:59.087 "write_zeroes": true, 00:20:59.087 "zcopy": true, 00:20:59.087 "get_zone_info": false, 00:20:59.087 "zone_management": false, 00:20:59.087 "zone_append": false, 00:20:59.087 "compare": false, 00:20:59.087 "compare_and_write": false, 00:20:59.087 "abort": true, 00:20:59.087 "seek_hole": false, 00:20:59.087 "seek_data": false, 00:20:59.087 "copy": true, 00:20:59.087 "nvme_iov_md": false 00:20:59.087 }, 00:20:59.087 "memory_domains": [ 00:20:59.087 { 00:20:59.087 "dma_device_id": "system", 00:20:59.087 "dma_device_type": 1 00:20:59.087 }, 00:20:59.087 { 00:20:59.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.087 "dma_device_type": 2 00:20:59.087 } 00:20:59.087 ], 00:20:59.087 "driver_specific": {} 00:20:59.087 }' 00:20:59.087 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.087 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.087 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:59.087 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.347 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.347 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:59.347 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.347 06:16:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.347 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:59.347 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.347 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.347 06:16:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:59.347 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:59.347 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:59.347 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:59.607 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:59.607 "name": "BaseBdev3", 00:20:59.607 "aliases": [ 00:20:59.607 "7940b15a-846e-4dfe-b780-ffd27169e5d0" 00:20:59.607 ], 00:20:59.607 "product_name": "Malloc disk", 00:20:59.607 "block_size": 512, 00:20:59.607 "num_blocks": 65536, 00:20:59.607 "uuid": "7940b15a-846e-4dfe-b780-ffd27169e5d0", 00:20:59.607 "assigned_rate_limits": { 00:20:59.607 "rw_ios_per_sec": 0, 00:20:59.607 "rw_mbytes_per_sec": 0, 00:20:59.607 "r_mbytes_per_sec": 0, 00:20:59.607 "w_mbytes_per_sec": 0 00:20:59.607 }, 00:20:59.607 "claimed": true, 00:20:59.607 "claim_type": "exclusive_write", 00:20:59.607 "zoned": false, 00:20:59.607 "supported_io_types": { 00:20:59.607 "read": true, 00:20:59.607 "write": true, 00:20:59.607 "unmap": true, 00:20:59.607 "flush": true, 00:20:59.607 "reset": true, 00:20:59.607 "nvme_admin": false, 00:20:59.607 "nvme_io": false, 00:20:59.607 "nvme_io_md": false, 00:20:59.607 "write_zeroes": true, 00:20:59.607 "zcopy": true, 00:20:59.607 "get_zone_info": false, 00:20:59.607 "zone_management": false, 00:20:59.607 "zone_append": false, 00:20:59.607 "compare": false, 00:20:59.607 "compare_and_write": false, 00:20:59.607 "abort": true, 00:20:59.607 "seek_hole": false, 00:20:59.607 "seek_data": false, 00:20:59.607 "copy": true, 00:20:59.607 "nvme_iov_md": false 00:20:59.607 }, 00:20:59.607 "memory_domains": [ 00:20:59.607 { 00:20:59.607 "dma_device_id": "system", 00:20:59.607 "dma_device_type": 1 00:20:59.607 }, 00:20:59.607 { 00:20:59.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.607 "dma_device_type": 2 00:20:59.607 } 00:20:59.607 ], 00:20:59.607 "driver_specific": {} 00:20:59.607 }' 00:20:59.607 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.607 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.607 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:59.607 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.867 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.867 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:59.867 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.867 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.867 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:59.867 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.867 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:00.127 [2024-08-13 06:16:01.835611] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.127 [2024-08-13 06:16:01.835690] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.127 [2024-08-13 06:16:01.835830] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.127 [2024-08-13 06:16:01.836154] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.127 [2024-08-13 06:16:01.836212] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 98129 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 98129 ']' 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 98129 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98129 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:00.127 killing process with pid 98129 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98129' 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 98129 00:21:00.127 [2024-08-13 06:16:01.898535] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:00.127 06:16:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 98129 00:21:00.386 [2024-08-13 06:16:01.957283] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.647 ************************************ 00:21:00.647 END TEST raid5f_state_function_test_sb 00:21:00.647 ************************************ 00:21:00.647 06:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:00.647 00:21:00.647 real 0m24.918s 00:21:00.647 user 0m45.525s 00:21:00.647 sys 0m4.343s 00:21:00.647 06:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:00.647 06:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.647 06:16:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:21:00.647 06:16:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:00.647 06:16:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:00.647 06:16:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.647 ************************************ 00:21:00.647 START TEST raid5f_superblock_test 00:21:00.647 ************************************ 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # 
raid_superblock_test raid5f 3 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=99016 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 99016 /var/tmp/spdk-raid.sock 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 99016 ']' 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:00.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:00.647 06:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.907 [2024-08-13 06:16:02.502999] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:21:00.907 [2024-08-13 06:16:02.503131] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99016 ] 00:21:00.907 [2024-08-13 06:16:02.648905] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.907 [2024-08-13 06:16:02.693205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.167 [2024-08-13 06:16:02.736334] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.167 [2024-08-13 06:16:02.736376] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.736 06:16:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.737 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:01.737 malloc1 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:01.997 [2024-08-13 06:16:03.712209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:01.997 [2024-08-13 06:16:03.712336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.997 [2024-08-13 06:16:03.712381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:01.997 [2024-08-13 06:16:03.712413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.997 [2024-08-13 06:16:03.714580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.997 [2024-08-13 06:16:03.714655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:01.997 pt1 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.997 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:02.256 malloc2 00:21:02.256 06:16:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.607 [2024-08-13 06:16:04.120064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.607 [2024-08-13 06:16:04.120206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.607 [2024-08-13 06:16:04.120244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:02.607 [2024-08-13 06:16:04.120271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.607 [2024-08-13 06:16:04.122400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.607 [2024-08-13 06:16:04.122475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.607 pt2 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:02.607 malloc3 00:21:02.607 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:02.867 [2024-08-13 06:16:04.503466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:02.867 [2024-08-13 06:16:04.503581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.867 [2024-08-13 06:16:04.503619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:02.867 [2024-08-13 06:16:04.503646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.867 [2024-08-13 06:16:04.505636] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.867 [2024-08-13 06:16:04.505704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:02.867 pt3 00:21:02.867 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:02.867 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:02.867 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:03.127 [2024-08-13 06:16:04.687228] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:03.127 [2024-08-13 06:16:04.689132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:03.127 [2024-08-13 06:16:04.689233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:03.127 [2024-08-13 06:16:04.689413] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:21:03.127 [2024-08-13 06:16:04.689481] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:03.127 [2024-08-13 06:16:04.689774] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:03.127 [2024-08-13 06:16:04.690261] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:21:03.127 [2024-08-13 06:16:04.690308] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:21:03.127 [2024-08-13 06:16:04.690503] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.127 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:03.127 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:03.127 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:03.127 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.128 "name": "raid_bdev1", 00:21:03.128 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:03.128 "strip_size_kb": 64, 00:21:03.128 "state": "online", 00:21:03.128 "raid_level": "raid5f", 00:21:03.128 "superblock": true, 00:21:03.128 
"num_base_bdevs": 3, 00:21:03.128 "num_base_bdevs_discovered": 3, 00:21:03.128 "num_base_bdevs_operational": 3, 00:21:03.128 "base_bdevs_list": [ 00:21:03.128 { 00:21:03.128 "name": "pt1", 00:21:03.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.128 "is_configured": true, 00:21:03.128 "data_offset": 2048, 00:21:03.128 "data_size": 63488 00:21:03.128 }, 00:21:03.128 { 00:21:03.128 "name": "pt2", 00:21:03.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.128 "is_configured": true, 00:21:03.128 "data_offset": 2048, 00:21:03.128 "data_size": 63488 00:21:03.128 }, 00:21:03.128 { 00:21:03.128 "name": "pt3", 00:21:03.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:03.128 "is_configured": true, 00:21:03.128 "data_offset": 2048, 00:21:03.128 "data_size": 63488 00:21:03.128 } 00:21:03.128 ] 00:21:03.128 }' 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.128 06:16:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:03.697 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:03.957 [2024-08-13 06:16:05.658428] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.957 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:03.957 "name": "raid_bdev1", 00:21:03.957 "aliases": [ 00:21:03.957 "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d" 00:21:03.957 ], 00:21:03.957 "product_name": "Raid Volume", 00:21:03.957 "block_size": 512, 00:21:03.957 "num_blocks": 126976, 00:21:03.957 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:03.957 "assigned_rate_limits": { 00:21:03.957 "rw_ios_per_sec": 0, 00:21:03.957 "rw_mbytes_per_sec": 0, 00:21:03.957 "r_mbytes_per_sec": 0, 00:21:03.957 "w_mbytes_per_sec": 0 00:21:03.957 }, 00:21:03.957 "claimed": false, 00:21:03.957 "zoned": false, 00:21:03.957 "supported_io_types": { 00:21:03.957 "read": true, 00:21:03.957 "write": true, 00:21:03.957 "unmap": false, 00:21:03.957 "flush": false, 00:21:03.957 "reset": true, 00:21:03.957 "nvme_admin": false, 00:21:03.957 "nvme_io": false, 00:21:03.957 "nvme_io_md": false, 00:21:03.957 "write_zeroes": true, 00:21:03.957 "zcopy": false, 00:21:03.957 "get_zone_info": false, 00:21:03.957 "zone_management": false, 00:21:03.957 "zone_append": false, 00:21:03.957 "compare": false, 00:21:03.957 "compare_and_write": false, 00:21:03.957 "abort": false, 00:21:03.957 "seek_hole": false, 00:21:03.957 "seek_data": false, 00:21:03.957 "copy": false, 00:21:03.957 "nvme_iov_md": false 00:21:03.957 }, 00:21:03.957 "driver_specific": { 00:21:03.957 "raid": { 00:21:03.957 "uuid": 
"f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:03.957 "strip_size_kb": 64, 00:21:03.957 "state": "online", 00:21:03.957 "raid_level": "raid5f", 00:21:03.957 "superblock": true, 00:21:03.957 "num_base_bdevs": 3, 00:21:03.957 "num_base_bdevs_discovered": 3, 00:21:03.957 "num_base_bdevs_operational": 3, 00:21:03.957 "base_bdevs_list": [ 00:21:03.957 { 00:21:03.957 "name": "pt1", 00:21:03.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.957 "is_configured": true, 00:21:03.957 "data_offset": 2048, 00:21:03.957 "data_size": 63488 00:21:03.957 }, 00:21:03.957 { 00:21:03.957 "name": "pt2", 00:21:03.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.957 "is_configured": true, 00:21:03.957 "data_offset": 2048, 00:21:03.957 "data_size": 63488 00:21:03.957 }, 00:21:03.957 { 00:21:03.957 "name": "pt3", 00:21:03.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:03.957 "is_configured": true, 00:21:03.957 "data_offset": 2048, 00:21:03.957 "data_size": 63488 00:21:03.957 } 00:21:03.957 ] 00:21:03.957 } 00:21:03.957 } 00:21:03.957 }' 00:21:03.957 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:03.957 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:03.957 pt2 00:21:03.957 pt3' 00:21:03.957 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.957 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:03.957 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:04.217 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:04.217 "name": "pt1", 00:21:04.217 "aliases": [ 00:21:04.217 "00000000-0000-0000-0000-000000000001" 00:21:04.217 ], 00:21:04.217 "product_name": "passthru", 00:21:04.217 "block_size": 512, 00:21:04.217 "num_blocks": 65536, 00:21:04.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:04.217 "assigned_rate_limits": { 00:21:04.217 "rw_ios_per_sec": 0, 00:21:04.217 "rw_mbytes_per_sec": 0, 00:21:04.217 "r_mbytes_per_sec": 0, 00:21:04.217 "w_mbytes_per_sec": 0 00:21:04.217 }, 00:21:04.217 "claimed": true, 00:21:04.217 "claim_type": "exclusive_write", 00:21:04.217 "zoned": false, 00:21:04.217 "supported_io_types": { 00:21:04.217 "read": true, 00:21:04.217 "write": true, 00:21:04.217 "unmap": true, 00:21:04.217 "flush": true, 00:21:04.217 "reset": true, 00:21:04.217 "nvme_admin": false, 00:21:04.217 "nvme_io": false, 00:21:04.217 "nvme_io_md": false, 00:21:04.217 "write_zeroes": true, 00:21:04.217 "zcopy": true, 00:21:04.217 "get_zone_info": false, 00:21:04.217 "zone_management": false, 00:21:04.217 "zone_append": false, 00:21:04.217 "compare": false, 00:21:04.217 "compare_and_write": false, 00:21:04.217 "abort": true, 00:21:04.217 "seek_hole": false, 00:21:04.217 "seek_data": false, 00:21:04.217 "copy": true, 00:21:04.217 "nvme_iov_md": false 00:21:04.217 }, 00:21:04.217 "memory_domains": [ 00:21:04.217 { 00:21:04.217 "dma_device_id": "system", 00:21:04.217 "dma_device_type": 1 00:21:04.217 }, 00:21:04.217 { 00:21:04.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.217 "dma_device_type": 2 00:21:04.217 } 00:21:04.217 ], 00:21:04.217 "driver_specific": { 00:21:04.217 "passthru": { 00:21:04.217 "name": "pt1", 00:21:04.217 "base_bdev_name": "malloc1" 
00:21:04.217 } 00:21:04.217 } 00:21:04.217 }' 00:21:04.217 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.217 06:16:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.477 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.736 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.736 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.736 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:04.736 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:04.737 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:04.737 "name": "pt2", 00:21:04.737 "aliases": [ 00:21:04.737 "00000000-0000-0000-0000-000000000002" 00:21:04.737 ], 00:21:04.737 "product_name": "passthru", 00:21:04.737 "block_size": 512, 00:21:04.737 "num_blocks": 65536, 00:21:04.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.737 "assigned_rate_limits": { 00:21:04.737 "rw_ios_per_sec": 0, 00:21:04.737 "rw_mbytes_per_sec": 0, 00:21:04.737 "r_mbytes_per_sec": 0, 00:21:04.737 "w_mbytes_per_sec": 0 00:21:04.737 }, 00:21:04.737 "claimed": true, 00:21:04.737 "claim_type": "exclusive_write", 00:21:04.737 "zoned": false, 00:21:04.737 "supported_io_types": { 00:21:04.737 "read": true, 00:21:04.737 "write": true, 00:21:04.737 "unmap": true, 00:21:04.737 "flush": true, 00:21:04.737 "reset": true, 00:21:04.737 "nvme_admin": false, 00:21:04.737 "nvme_io": false, 00:21:04.737 "nvme_io_md": false, 00:21:04.737 "write_zeroes": true, 00:21:04.737 "zcopy": true, 00:21:04.737 "get_zone_info": false, 00:21:04.737 "zone_management": false, 00:21:04.737 "zone_append": false, 00:21:04.737 "compare": false, 00:21:04.737 "compare_and_write": false, 00:21:04.737 "abort": true, 00:21:04.737 "seek_hole": false, 00:21:04.737 "seek_data": false, 00:21:04.737 "copy": true, 00:21:04.737 "nvme_iov_md": false 00:21:04.737 }, 00:21:04.737 "memory_domains": [ 00:21:04.737 { 00:21:04.737 "dma_device_id": "system", 00:21:04.737 "dma_device_type": 1 00:21:04.737 }, 00:21:04.737 { 00:21:04.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.737 "dma_device_type": 2 00:21:04.737 } 00:21:04.737 ], 00:21:04.737 "driver_specific": { 00:21:04.737 "passthru": { 00:21:04.737 "name": "pt2", 00:21:04.737 "base_bdev_name": "malloc2" 00:21:04.737 } 00:21:04.737 } 00:21:04.737 }' 00:21:04.737 06:16:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.737 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:04.996 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.255 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.256 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.256 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:05.256 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:05.256 06:16:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.515 "name": "pt3", 00:21:05.515 "aliases": [ 00:21:05.515 "00000000-0000-0000-0000-000000000003" 00:21:05.515 ], 00:21:05.515 "product_name": "passthru", 00:21:05.515 "block_size": 512, 00:21:05.515 "num_blocks": 65536, 00:21:05.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:05.515 "assigned_rate_limits": { 00:21:05.515 "rw_ios_per_sec": 0, 00:21:05.515 "rw_mbytes_per_sec": 0, 00:21:05.515 "r_mbytes_per_sec": 0, 00:21:05.515 "w_mbytes_per_sec": 0 00:21:05.515 }, 00:21:05.515 "claimed": true, 00:21:05.515 "claim_type": "exclusive_write", 00:21:05.515 "zoned": false, 00:21:05.515 "supported_io_types": { 00:21:05.515 "read": true, 00:21:05.515 "write": true, 00:21:05.515 "unmap": true, 00:21:05.515 "flush": true, 00:21:05.515 "reset": true, 00:21:05.515 "nvme_admin": false, 00:21:05.515 "nvme_io": false, 00:21:05.515 "nvme_io_md": false, 00:21:05.515 "write_zeroes": true, 00:21:05.515 "zcopy": true, 00:21:05.515 "get_zone_info": false, 00:21:05.515 "zone_management": false, 00:21:05.515 "zone_append": false, 00:21:05.515 "compare": false, 00:21:05.515 "compare_and_write": false, 00:21:05.515 "abort": true, 00:21:05.515 "seek_hole": false, 00:21:05.515 "seek_data": false, 00:21:05.515 "copy": true, 00:21:05.515 "nvme_iov_md": false 00:21:05.515 }, 00:21:05.515 "memory_domains": [ 00:21:05.515 { 00:21:05.515 "dma_device_id": "system", 00:21:05.515 "dma_device_type": 1 00:21:05.515 }, 00:21:05.515 { 00:21:05.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.515 "dma_device_type": 2 00:21:05.515 } 00:21:05.515 ], 00:21:05.515 "driver_specific": { 00:21:05.515 "passthru": { 00:21:05.515 "name": "pt3", 00:21:05.515 "base_bdev_name": "malloc3" 00:21:05.515 } 00:21:05.515 } 00:21:05.515 }' 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.515 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.775 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.775 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.775 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.775 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.775 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:05.775 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:21:06.034 [2024-08-13 06:16:07.611156] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.034 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=f47b2844-bfdc-4c0e-a74d-4316ace4ac1d 00:21:06.034 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z f47b2844-bfdc-4c0e-a74d-4316ace4ac1d ']' 00:21:06.034 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:06.034 [2024-08-13 06:16:07.794638] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.034 [2024-08-13 06:16:07.794665] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.034 [2024-08-13 06:16:07.794746] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.034 [2024-08-13 06:16:07.794821] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.034 [2024-08-13 06:16:07.794840] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:21:06.035 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.035 06:16:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:21:06.294 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:21:06.294 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:21:06.294 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.294 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:06.554 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.554 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:06.814 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.814 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:07.073 06:16:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:07.333 [2024-08-13 06:16:08.972824] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:07.333 [2024-08-13 06:16:08.974479] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:07.333 [2024-08-13 06:16:08.974561] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:07.333 [2024-08-13 06:16:08.974621] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:07.333 [2024-08-13 06:16:08.974706] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:07.333 [2024-08-13 06:16:08.974776] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc3 00:21:07.333 [2024-08-13 06:16:08.974825] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.333 [2024-08-13 06:16:08.974893] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:21:07.333 request: 00:21:07.333 { 00:21:07.333 "name": "raid_bdev1", 00:21:07.333 "raid_level": "raid5f", 00:21:07.333 "base_bdevs": [ 00:21:07.333 "malloc1", 00:21:07.333 "malloc2", 00:21:07.333 "malloc3" 00:21:07.333 ], 00:21:07.333 "strip_size_kb": 64, 00:21:07.333 "superblock": false, 00:21:07.333 "method": "bdev_raid_create", 00:21:07.333 "req_id": 1 00:21:07.333 } 00:21:07.333 Got JSON-RPC error response 00:21:07.333 response: 00:21:07.333 { 00:21:07.333 "code": -17, 00:21:07.333 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:07.333 } 00:21:07.333 06:16:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:21:07.333 06:16:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:21:07.333 06:16:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:21:07.333 06:16:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:21:07.333 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.333 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:21:07.592 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:21:07.592 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:21:07.592 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:07.592 [2024-08-13 06:16:09.380049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:07.592 [2024-08-13 06:16:09.380105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.592 [2024-08-13 06:16:09.380121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:07.592 [2024-08-13 06:16:09.380130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.592 [2024-08-13 06:16:09.382149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.592 [2024-08-13 06:16:09.382183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:07.592 [2024-08-13 06:16:09.382243] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:07.592 [2024-08-13 06:16:09.382283] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:07.852 pt1 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:07.852 06:16:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.852 "name": "raid_bdev1", 00:21:07.852 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:07.852 "strip_size_kb": 64, 00:21:07.852 "state": "configuring", 00:21:07.852 "raid_level": "raid5f", 00:21:07.852 "superblock": true, 00:21:07.852 "num_base_bdevs": 3, 00:21:07.852 "num_base_bdevs_discovered": 1, 00:21:07.852 "num_base_bdevs_operational": 3, 00:21:07.852 "base_bdevs_list": [ 00:21:07.852 { 00:21:07.852 "name": "pt1", 00:21:07.852 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:07.852 "is_configured": true, 00:21:07.852 "data_offset": 2048, 00:21:07.852 "data_size": 63488 00:21:07.852 }, 00:21:07.852 { 00:21:07.852 "name": null, 00:21:07.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.852 "is_configured": false, 00:21:07.852 "data_offset": 2048, 00:21:07.852 "data_size": 63488 00:21:07.852 }, 00:21:07.852 { 00:21:07.852 "name": null, 00:21:07.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:07.852 "is_configured": false, 00:21:07.852 "data_offset": 2048, 00:21:07.852 "data_size": 63488 00:21:07.852 } 00:21:07.852 ] 00:21:07.852 }' 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.852 06:16:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.422 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:21:08.422 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:08.682 [2024-08-13 06:16:10.330454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:08.682 [2024-08-13 06:16:10.330599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.682 [2024-08-13 06:16:10.330640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:08.682 [2024-08-13 06:16:10.330667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.682 [2024-08-13 06:16:10.331086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.682 [2024-08-13 06:16:10.331142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:08.682 [2024-08-13 06:16:10.331249] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:08.682 [2024-08-13 
06:16:10.331298] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.682 pt2 00:21:08.682 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:08.942 [2024-08-13 06:16:10.522192] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.942 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.201 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.201 "name": "raid_bdev1", 00:21:09.201 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:09.201 "strip_size_kb": 64, 00:21:09.201 "state": "configuring", 00:21:09.201 "raid_level": "raid5f", 00:21:09.201 "superblock": true, 00:21:09.201 "num_base_bdevs": 3, 00:21:09.201 "num_base_bdevs_discovered": 1, 00:21:09.201 "num_base_bdevs_operational": 3, 00:21:09.201 "base_bdevs_list": [ 00:21:09.201 { 00:21:09.201 "name": "pt1", 00:21:09.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:09.201 "is_configured": true, 00:21:09.202 "data_offset": 2048, 00:21:09.202 "data_size": 63488 00:21:09.202 }, 00:21:09.202 { 00:21:09.202 "name": null, 00:21:09.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.202 "is_configured": false, 00:21:09.202 "data_offset": 2048, 00:21:09.202 "data_size": 63488 00:21:09.202 }, 00:21:09.202 { 00:21:09.202 "name": null, 00:21:09.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:09.202 "is_configured": false, 00:21:09.202 "data_offset": 2048, 00:21:09.202 "data_size": 63488 00:21:09.202 } 00:21:09.202 ] 00:21:09.202 }' 00:21:09.202 06:16:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.202 06:16:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.461 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:21:09.461 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:09.461 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.721 [2024-08-13 06:16:11.404697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.721 [2024-08-13 06:16:11.404749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.721 [2024-08-13 06:16:11.404764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:09.721 [2024-08-13 06:16:11.404774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.721 [2024-08-13 06:16:11.405135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.721 [2024-08-13 06:16:11.405183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.721 [2024-08-13 06:16:11.405241] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:09.721 [2024-08-13 06:16:11.405263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.721 pt2 00:21:09.721 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:09.721 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:09.721 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:09.981 [2024-08-13 06:16:11.592398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:09.981 [2024-08-13 06:16:11.592466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.981 [2024-08-13 06:16:11.592484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:09.981 [2024-08-13 06:16:11.592497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.981 [2024-08-13 06:16:11.592911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.981 [2024-08-13 06:16:11.592934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:09.981 [2024-08-13 06:16:11.593002] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:09.981 [2024-08-13 06:16:11.593053] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:09.981 [2024-08-13 06:16:11.593163] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:21:09.981 [2024-08-13 06:16:11.593180] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:09.981 [2024-08-13 06:16:11.593390] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:21:09.981 [2024-08-13 06:16:11.593757] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:21:09.981 [2024-08-13 06:16:11.593775] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:21:09.981 [2024-08-13 06:16:11.593874] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.981 pt3 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 
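The verify_raid_bdev_state calls traced above and below reduce to a single JSON-RPC query plus a jq filter over the result. A minimal stand-alone sketch of that pattern follows; the rpc.py invocation and the jq expression are copied from the trace, while the individual field checks are illustrative assumptions, since the helper's exact comparison logic is not visible in this log.

  #!/usr/bin/env bash
  # Fetch the raid bdev's info over the test RPC socket and check a few fields.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

  # Illustrative assertions; the real helper compares these against its arguments.
  [[ $(jq -r '.state' <<<"$info") == configuring ]] || exit 1
  [[ $(jq -r '.raid_level' <<<"$info") == raid5f ]] || exit 1
  [[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]] || exit 1
  [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 3 ]] || exit 1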
00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.981 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.241 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.241 "name": "raid_bdev1", 00:21:10.241 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:10.241 "strip_size_kb": 64, 00:21:10.241 "state": "online", 00:21:10.241 "raid_level": "raid5f", 00:21:10.241 "superblock": true, 00:21:10.241 "num_base_bdevs": 3, 00:21:10.241 "num_base_bdevs_discovered": 3, 00:21:10.241 "num_base_bdevs_operational": 3, 00:21:10.241 "base_bdevs_list": [ 00:21:10.241 { 00:21:10.241 "name": "pt1", 00:21:10.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.241 "is_configured": true, 00:21:10.241 "data_offset": 2048, 00:21:10.241 "data_size": 63488 00:21:10.241 }, 00:21:10.241 { 00:21:10.241 "name": "pt2", 00:21:10.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.241 "is_configured": true, 00:21:10.241 "data_offset": 2048, 00:21:10.241 "data_size": 63488 00:21:10.241 }, 00:21:10.241 { 00:21:10.241 "name": "pt3", 00:21:10.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.241 "is_configured": true, 00:21:10.241 "data_offset": 2048, 00:21:10.241 "data_size": 63488 00:21:10.241 } 00:21:10.241 ] 00:21:10.241 }' 00:21:10.241 06:16:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.241 06:16:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:10.810 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:10.810 [2024-08-13 06:16:12.582876] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.071 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:11.071 "name": "raid_bdev1", 00:21:11.071 "aliases": [ 00:21:11.071 "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d" 00:21:11.071 ], 00:21:11.071 "product_name": "Raid Volume", 00:21:11.071 "block_size": 512, 00:21:11.071 "num_blocks": 126976, 00:21:11.071 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:11.071 "assigned_rate_limits": { 00:21:11.071 "rw_ios_per_sec": 0, 00:21:11.071 "rw_mbytes_per_sec": 0, 00:21:11.071 "r_mbytes_per_sec": 0, 00:21:11.071 "w_mbytes_per_sec": 0 00:21:11.071 }, 00:21:11.071 "claimed": false, 00:21:11.071 "zoned": false, 00:21:11.071 "supported_io_types": { 00:21:11.071 "read": true, 00:21:11.071 "write": true, 00:21:11.071 "unmap": false, 00:21:11.071 "flush": false, 00:21:11.071 "reset": true, 00:21:11.071 "nvme_admin": false, 00:21:11.071 "nvme_io": false, 00:21:11.071 "nvme_io_md": false, 00:21:11.071 "write_zeroes": true, 00:21:11.071 "zcopy": false, 00:21:11.071 "get_zone_info": false, 00:21:11.071 "zone_management": false, 00:21:11.071 "zone_append": false, 00:21:11.071 "compare": false, 00:21:11.071 "compare_and_write": false, 00:21:11.071 "abort": false, 00:21:11.071 "seek_hole": false, 00:21:11.071 "seek_data": false, 00:21:11.071 "copy": false, 00:21:11.071 "nvme_iov_md": false 00:21:11.071 }, 00:21:11.071 "driver_specific": { 00:21:11.071 "raid": { 00:21:11.071 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:11.071 "strip_size_kb": 64, 00:21:11.071 "state": "online", 00:21:11.071 "raid_level": "raid5f", 00:21:11.071 "superblock": true, 00:21:11.071 "num_base_bdevs": 3, 00:21:11.071 "num_base_bdevs_discovered": 3, 00:21:11.071 "num_base_bdevs_operational": 3, 00:21:11.071 "base_bdevs_list": [ 00:21:11.071 { 00:21:11.071 "name": "pt1", 00:21:11.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.071 "is_configured": true, 00:21:11.071 "data_offset": 2048, 00:21:11.071 "data_size": 63488 00:21:11.071 }, 00:21:11.071 { 00:21:11.071 "name": "pt2", 00:21:11.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.071 "is_configured": true, 00:21:11.071 "data_offset": 2048, 00:21:11.071 "data_size": 63488 00:21:11.071 }, 00:21:11.071 { 00:21:11.071 "name": "pt3", 00:21:11.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:11.071 "is_configured": true, 00:21:11.071 "data_offset": 2048, 00:21:11.071 "data_size": 63488 00:21:11.071 } 00:21:11.071 ] 00:21:11.071 } 00:21:11.071 } 00:21:11.071 }' 00:21:11.071 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.071 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:11.071 pt2 00:21:11.071 pt3' 00:21:11.071 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.071 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:11.071 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.071 
06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.071 "name": "pt1", 00:21:11.071 "aliases": [ 00:21:11.071 "00000000-0000-0000-0000-000000000001" 00:21:11.071 ], 00:21:11.071 "product_name": "passthru", 00:21:11.071 "block_size": 512, 00:21:11.071 "num_blocks": 65536, 00:21:11.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.071 "assigned_rate_limits": { 00:21:11.071 "rw_ios_per_sec": 0, 00:21:11.071 "rw_mbytes_per_sec": 0, 00:21:11.071 "r_mbytes_per_sec": 0, 00:21:11.071 "w_mbytes_per_sec": 0 00:21:11.071 }, 00:21:11.071 "claimed": true, 00:21:11.071 "claim_type": "exclusive_write", 00:21:11.071 "zoned": false, 00:21:11.071 "supported_io_types": { 00:21:11.071 "read": true, 00:21:11.071 "write": true, 00:21:11.071 "unmap": true, 00:21:11.071 "flush": true, 00:21:11.071 "reset": true, 00:21:11.071 "nvme_admin": false, 00:21:11.071 "nvme_io": false, 00:21:11.071 "nvme_io_md": false, 00:21:11.071 "write_zeroes": true, 00:21:11.071 "zcopy": true, 00:21:11.071 "get_zone_info": false, 00:21:11.071 "zone_management": false, 00:21:11.071 "zone_append": false, 00:21:11.071 "compare": false, 00:21:11.071 "compare_and_write": false, 00:21:11.071 "abort": true, 00:21:11.071 "seek_hole": false, 00:21:11.071 "seek_data": false, 00:21:11.071 "copy": true, 00:21:11.071 "nvme_iov_md": false 00:21:11.071 }, 00:21:11.071 "memory_domains": [ 00:21:11.071 { 00:21:11.071 "dma_device_id": "system", 00:21:11.071 "dma_device_type": 1 00:21:11.071 }, 00:21:11.071 { 00:21:11.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.071 "dma_device_type": 2 00:21:11.071 } 00:21:11.071 ], 00:21:11.071 "driver_specific": { 00:21:11.071 "passthru": { 00:21:11.071 "name": "pt1", 00:21:11.071 "base_bdev_name": "malloc1" 00:21:11.071 } 00:21:11.071 } 00:21:11.071 }' 00:21:11.331 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.331 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.331 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.331 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.331 06:16:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.331 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.331 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.331 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.331 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.331 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.591 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.591 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.591 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.591 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.591 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:11.591 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.591 "name": "pt2", 
00:21:11.591 "aliases": [ 00:21:11.591 "00000000-0000-0000-0000-000000000002" 00:21:11.591 ], 00:21:11.591 "product_name": "passthru", 00:21:11.591 "block_size": 512, 00:21:11.591 "num_blocks": 65536, 00:21:11.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.591 "assigned_rate_limits": { 00:21:11.591 "rw_ios_per_sec": 0, 00:21:11.591 "rw_mbytes_per_sec": 0, 00:21:11.591 "r_mbytes_per_sec": 0, 00:21:11.591 "w_mbytes_per_sec": 0 00:21:11.591 }, 00:21:11.591 "claimed": true, 00:21:11.591 "claim_type": "exclusive_write", 00:21:11.591 "zoned": false, 00:21:11.591 "supported_io_types": { 00:21:11.591 "read": true, 00:21:11.591 "write": true, 00:21:11.591 "unmap": true, 00:21:11.591 "flush": true, 00:21:11.591 "reset": true, 00:21:11.591 "nvme_admin": false, 00:21:11.591 "nvme_io": false, 00:21:11.591 "nvme_io_md": false, 00:21:11.591 "write_zeroes": true, 00:21:11.591 "zcopy": true, 00:21:11.591 "get_zone_info": false, 00:21:11.591 "zone_management": false, 00:21:11.591 "zone_append": false, 00:21:11.591 "compare": false, 00:21:11.591 "compare_and_write": false, 00:21:11.591 "abort": true, 00:21:11.591 "seek_hole": false, 00:21:11.591 "seek_data": false, 00:21:11.591 "copy": true, 00:21:11.591 "nvme_iov_md": false 00:21:11.591 }, 00:21:11.591 "memory_domains": [ 00:21:11.591 { 00:21:11.591 "dma_device_id": "system", 00:21:11.591 "dma_device_type": 1 00:21:11.591 }, 00:21:11.591 { 00:21:11.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.591 "dma_device_type": 2 00:21:11.591 } 00:21:11.591 ], 00:21:11.591 "driver_specific": { 00:21:11.591 "passthru": { 00:21:11.591 "name": "pt2", 00:21:11.591 "base_bdev_name": "malloc2" 00:21:11.591 } 00:21:11.591 } 00:21:11.591 }' 00:21:11.591 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.851 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.113 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.113 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.113 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:12.113 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.113 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.113 "name": "pt3", 00:21:12.113 "aliases": [ 00:21:12.113 "00000000-0000-0000-0000-000000000003" 00:21:12.113 ], 00:21:12.113 "product_name": 
"passthru", 00:21:12.113 "block_size": 512, 00:21:12.113 "num_blocks": 65536, 00:21:12.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:12.113 "assigned_rate_limits": { 00:21:12.113 "rw_ios_per_sec": 0, 00:21:12.113 "rw_mbytes_per_sec": 0, 00:21:12.113 "r_mbytes_per_sec": 0, 00:21:12.113 "w_mbytes_per_sec": 0 00:21:12.113 }, 00:21:12.113 "claimed": true, 00:21:12.113 "claim_type": "exclusive_write", 00:21:12.113 "zoned": false, 00:21:12.113 "supported_io_types": { 00:21:12.113 "read": true, 00:21:12.113 "write": true, 00:21:12.113 "unmap": true, 00:21:12.113 "flush": true, 00:21:12.113 "reset": true, 00:21:12.113 "nvme_admin": false, 00:21:12.113 "nvme_io": false, 00:21:12.113 "nvme_io_md": false, 00:21:12.113 "write_zeroes": true, 00:21:12.113 "zcopy": true, 00:21:12.113 "get_zone_info": false, 00:21:12.113 "zone_management": false, 00:21:12.113 "zone_append": false, 00:21:12.113 "compare": false, 00:21:12.113 "compare_and_write": false, 00:21:12.113 "abort": true, 00:21:12.113 "seek_hole": false, 00:21:12.113 "seek_data": false, 00:21:12.113 "copy": true, 00:21:12.113 "nvme_iov_md": false 00:21:12.113 }, 00:21:12.113 "memory_domains": [ 00:21:12.113 { 00:21:12.113 "dma_device_id": "system", 00:21:12.113 "dma_device_type": 1 00:21:12.113 }, 00:21:12.113 { 00:21:12.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.113 "dma_device_type": 2 00:21:12.113 } 00:21:12.113 ], 00:21:12.113 "driver_specific": { 00:21:12.113 "passthru": { 00:21:12.113 "name": "pt3", 00:21:12.113 "base_bdev_name": "malloc3" 00:21:12.113 } 00:21:12.113 } 00:21:12.113 }' 00:21:12.113 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.374 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.374 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.374 06:16:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.374 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.374 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.374 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.374 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.374 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.374 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.633 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.633 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.633 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:12.633 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:21:12.892 [2024-08-13 06:16:14.447793] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.892 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' f47b2844-bfdc-4c0e-a74d-4316ace4ac1d '!=' f47b2844-bfdc-4c0e-a74d-4316ace4ac1d ']' 00:21:12.892 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:21:12.892 06:16:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:21:12.892 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:12.892 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:12.892 [2024-08-13 06:16:14.659276] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.151 "name": "raid_bdev1", 00:21:13.151 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:13.151 "strip_size_kb": 64, 00:21:13.151 "state": "online", 00:21:13.151 "raid_level": "raid5f", 00:21:13.151 "superblock": true, 00:21:13.151 "num_base_bdevs": 3, 00:21:13.151 "num_base_bdevs_discovered": 2, 00:21:13.151 "num_base_bdevs_operational": 2, 00:21:13.151 "base_bdevs_list": [ 00:21:13.151 { 00:21:13.151 "name": null, 00:21:13.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.151 "is_configured": false, 00:21:13.151 "data_offset": 2048, 00:21:13.151 "data_size": 63488 00:21:13.151 }, 00:21:13.151 { 00:21:13.151 "name": "pt2", 00:21:13.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:13.151 "is_configured": true, 00:21:13.151 "data_offset": 2048, 00:21:13.151 "data_size": 63488 00:21:13.151 }, 00:21:13.151 { 00:21:13.151 "name": "pt3", 00:21:13.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:13.151 "is_configured": true, 00:21:13.151 "data_offset": 2048, 00:21:13.151 "data_size": 63488 00:21:13.151 } 00:21:13.151 ] 00:21:13.151 }' 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.151 06:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.721 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:13.980 [2024-08-13 06:16:15.609738] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:13.980 
[2024-08-13 06:16:15.609832] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:13.980 [2024-08-13 06:16:15.609912] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.980 [2024-08-13 06:16:15.609969] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.980 [2024-08-13 06:16:15.609979] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:21:13.980 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.980 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:21:14.240 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:21:14.240 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:21:14.240 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:14.240 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:14.240 06:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:14.240 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.240 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:14.240 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:14.500 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.500 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:14.500 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:21:14.500 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:14.500 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:14.760 [2024-08-13 06:16:16.332465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:14.760 [2024-08-13 06:16:16.332534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.760 [2024-08-13 06:16:16.332552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:14.760 [2024-08-13 06:16:16.332562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.760 [2024-08-13 06:16:16.334780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.760 [2024-08-13 06:16:16.334825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:14.760 [2024-08-13 06:16:16.334902] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:14.760 [2024-08-13 06:16:16.334956] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:14.760 pt2 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 2 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.760 "name": "raid_bdev1", 00:21:14.760 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:14.760 "strip_size_kb": 64, 00:21:14.760 "state": "configuring", 00:21:14.760 "raid_level": "raid5f", 00:21:14.760 "superblock": true, 00:21:14.760 "num_base_bdevs": 3, 00:21:14.760 "num_base_bdevs_discovered": 1, 00:21:14.760 "num_base_bdevs_operational": 2, 00:21:14.760 "base_bdevs_list": [ 00:21:14.760 { 00:21:14.760 "name": null, 00:21:14.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.760 "is_configured": false, 00:21:14.760 "data_offset": 2048, 00:21:14.760 "data_size": 63488 00:21:14.760 }, 00:21:14.760 { 00:21:14.760 "name": "pt2", 00:21:14.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.760 "is_configured": true, 00:21:14.760 "data_offset": 2048, 00:21:14.760 "data_size": 63488 00:21:14.760 }, 00:21:14.760 { 00:21:14.760 "name": null, 00:21:14.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:14.760 "is_configured": false, 00:21:14.760 "data_offset": 2048, 00:21:14.760 "data_size": 63488 00:21:14.760 } 00:21:14.760 ] 00:21:14.760 }' 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.760 06:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.329 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:21:15.329 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:15.589 [2024-08-13 06:16:17.298815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:15.589 [2024-08-13 06:16:17.298962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.589 [2024-08-13 06:16:17.298997] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:15.589 [2024-08-13 06:16:17.299026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.589 [2024-08-13 06:16:17.299422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.589 [2024-08-13 06:16:17.299483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:15.589 [2024-08-13 06:16:17.299585] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:15.589 [2024-08-13 06:16:17.299636] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:15.589 [2024-08-13 06:16:17.299761] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:21:15.589 [2024-08-13 06:16:17.299801] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:15.589 [2024-08-13 06:16:17.300022] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:15.589 [2024-08-13 06:16:17.300517] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:21:15.589 [2024-08-13 06:16:17.300567] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:21:15.589 [2024-08-13 06:16:17.300817] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.589 pt3 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.589 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.849 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.849 "name": "raid_bdev1", 00:21:15.849 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:15.849 "strip_size_kb": 64, 00:21:15.849 "state": "online", 00:21:15.849 "raid_level": "raid5f", 00:21:15.849 "superblock": true, 00:21:15.849 "num_base_bdevs": 3, 00:21:15.849 "num_base_bdevs_discovered": 2, 00:21:15.849 "num_base_bdevs_operational": 2, 00:21:15.849 "base_bdevs_list": [ 00:21:15.849 { 00:21:15.849 "name": null, 00:21:15.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.849 "is_configured": false, 00:21:15.849 
"data_offset": 2048, 00:21:15.849 "data_size": 63488 00:21:15.849 }, 00:21:15.849 { 00:21:15.849 "name": "pt2", 00:21:15.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.849 "is_configured": true, 00:21:15.849 "data_offset": 2048, 00:21:15.849 "data_size": 63488 00:21:15.849 }, 00:21:15.849 { 00:21:15.849 "name": "pt3", 00:21:15.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:15.849 "is_configured": true, 00:21:15.849 "data_offset": 2048, 00:21:15.849 "data_size": 63488 00:21:15.849 } 00:21:15.849 ] 00:21:15.849 }' 00:21:15.849 06:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.849 06:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.418 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:16.677 [2024-08-13 06:16:18.229296] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:16.677 [2024-08-13 06:16:18.229325] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.677 [2024-08-13 06:16:18.229390] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.677 [2024-08-13 06:16:18.229443] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.677 [2024-08-13 06:16:18.229452] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:21:16.677 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.677 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:21:16.937 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:21:16.937 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:21:16.937 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:21:16.937 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:21:16.937 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:16.937 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:17.197 [2024-08-13 06:16:18.860257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:17.197 [2024-08-13 06:16:18.860325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.197 [2024-08-13 06:16:18.860348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:17.197 [2024-08-13 06:16:18.860357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.197 [2024-08-13 06:16:18.862611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.197 [2024-08-13 06:16:18.862655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:17.197 [2024-08-13 06:16:18.862739] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:17.197 [2024-08-13 
06:16:18.862780] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:17.197 [2024-08-13 06:16:18.862897] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:17.197 [2024-08-13 06:16:18.862914] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.197 [2024-08-13 06:16:18.862930] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:21:17.197 [2024-08-13 06:16:18.862968] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:17.197 pt1 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.197 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.198 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.198 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.198 06:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.457 06:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.457 "name": "raid_bdev1", 00:21:17.457 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:17.457 "strip_size_kb": 64, 00:21:17.457 "state": "configuring", 00:21:17.457 "raid_level": "raid5f", 00:21:17.457 "superblock": true, 00:21:17.457 "num_base_bdevs": 3, 00:21:17.457 "num_base_bdevs_discovered": 1, 00:21:17.457 "num_base_bdevs_operational": 2, 00:21:17.457 "base_bdevs_list": [ 00:21:17.457 { 00:21:17.457 "name": null, 00:21:17.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.457 "is_configured": false, 00:21:17.457 "data_offset": 2048, 00:21:17.457 "data_size": 63488 00:21:17.457 }, 00:21:17.457 { 00:21:17.457 "name": "pt2", 00:21:17.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.457 "is_configured": true, 00:21:17.457 "data_offset": 2048, 00:21:17.457 "data_size": 63488 00:21:17.457 }, 00:21:17.457 { 00:21:17.457 "name": null, 00:21:17.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.457 "is_configured": false, 00:21:17.457 "data_offset": 2048, 00:21:17.457 "data_size": 63488 00:21:17.457 } 00:21:17.457 ] 00:21:17.457 }' 00:21:17.457 06:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.457 06:16:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.024 06:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:18.024 06:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:18.283 06:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:21:18.283 06:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:18.283 [2024-08-13 06:16:20.070189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:18.283 [2024-08-13 06:16:20.070301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.283 [2024-08-13 06:16:20.070338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:18.283 [2024-08-13 06:16:20.070348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.283 [2024-08-13 06:16:20.070753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.283 [2024-08-13 06:16:20.070771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:18.283 [2024-08-13 06:16:20.070849] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:18.283 [2024-08-13 06:16:20.070876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:18.283 [2024-08-13 06:16:20.070979] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:21:18.283 [2024-08-13 06:16:20.070987] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:18.283 [2024-08-13 06:16:20.071227] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:21:18.283 [2024-08-13 06:16:20.071657] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:21:18.283 [2024-08-13 06:16:20.071679] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:21:18.283 [2024-08-13 06:16:20.071825] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.283 pt3 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.543 
06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.543 "name": "raid_bdev1", 00:21:18.543 "uuid": "f47b2844-bfdc-4c0e-a74d-4316ace4ac1d", 00:21:18.543 "strip_size_kb": 64, 00:21:18.543 "state": "online", 00:21:18.543 "raid_level": "raid5f", 00:21:18.543 "superblock": true, 00:21:18.543 "num_base_bdevs": 3, 00:21:18.543 "num_base_bdevs_discovered": 2, 00:21:18.543 "num_base_bdevs_operational": 2, 00:21:18.543 "base_bdevs_list": [ 00:21:18.543 { 00:21:18.543 "name": null, 00:21:18.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.543 "is_configured": false, 00:21:18.543 "data_offset": 2048, 00:21:18.543 "data_size": 63488 00:21:18.543 }, 00:21:18.543 { 00:21:18.543 "name": "pt2", 00:21:18.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.543 "is_configured": true, 00:21:18.543 "data_offset": 2048, 00:21:18.543 "data_size": 63488 00:21:18.543 }, 00:21:18.543 { 00:21:18.543 "name": "pt3", 00:21:18.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.543 "is_configured": true, 00:21:18.543 "data_offset": 2048, 00:21:18.543 "data_size": 63488 00:21:18.543 } 00:21:18.543 ] 00:21:18.543 }' 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.543 06:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.111 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:19.111 06:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:19.371 06:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:21:19.371 06:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:19.371 06:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:21:19.631 [2024-08-13 06:16:21.272363] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' f47b2844-bfdc-4c0e-a74d-4316ace4ac1d '!=' f47b2844-bfdc-4c0e-a74d-4316ace4ac1d ']' 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 99016 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 99016 ']' 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 99016 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99016 00:21:19.631 killing process with pid 99016 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99016' 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 99016 00:21:19.631 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 99016 00:21:19.631 [2024-08-13 06:16:21.319455] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:19.631 [2024-08-13 06:16:21.319524] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.631 [2024-08-13 06:16:21.319581] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.631 [2024-08-13 06:16:21.319592] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:21:19.631 [2024-08-13 06:16:21.352491] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:19.891 06:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:21:19.891 00:21:19.891 real 0m19.185s 00:21:19.891 user 0m35.114s 00:21:19.891 sys 0m3.248s 00:21:19.891 ************************************ 00:21:19.891 END TEST raid5f_superblock_test 00:21:19.891 ************************************ 00:21:19.891 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:19.891 06:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.891 06:16:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # '[' true = true ']' 00:21:19.891 06:16:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:21:19.891 06:16:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:19.891 06:16:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:19.891 06:16:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:19.891 ************************************ 00:21:19.891 START TEST raid5f_rebuild_test 00:21:19.891 ************************************ 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 false false true 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:19.891 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 
00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=99696 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 99696 /var/tmp/spdk-raid.sock 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 99696 ']' 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:20.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:20.151 06:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.151 [2024-08-13 06:16:21.783064] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
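The trace above starts the bdevperf application that will host raid_bdev1 and then waits for its RPC socket before issuing any bdev RPCs. A minimal sketch of the same launch sequence, reusing the exact command line and socket path from this run; the poll loop is an illustrative stand-in for the suite's waitforlisten helper, not its literal implementation:

    # Start bdevperf with a dedicated RPC socket and the raid_bdev1 target
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Illustrative stand-in for waitforlisten: poll until the RPC socket answers
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until $rpc -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done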
00:21:20.151 [2024-08-13 06:16:21.783305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99696 ] 00:21:20.151 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:20.151 Zero copy mechanism will not be used. 00:21:20.151 [2024-08-13 06:16:21.930064] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.410 [2024-08-13 06:16:21.977742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.411 [2024-08-13 06:16:22.020778] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.411 [2024-08-13 06:16:22.020819] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.979 06:16:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:20.980 06:16:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:21:20.980 06:16:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:20.980 06:16:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:20.980 BaseBdev1_malloc 00:21:20.980 06:16:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:21.239 [2024-08-13 06:16:22.905112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:21.239 [2024-08-13 06:16:22.905196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.239 [2024-08-13 06:16:22.905227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:21.239 [2024-08-13 06:16:22.905240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.239 [2024-08-13 06:16:22.907360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.239 [2024-08-13 06:16:22.907407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:21.239 BaseBdev1 00:21:21.239 06:16:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:21.239 06:16:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:21.498 BaseBdev2_malloc 00:21:21.498 06:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:21.758 [2024-08-13 06:16:23.312999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:21.758 [2024-08-13 06:16:23.313084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.758 [2024-08-13 06:16:23.313107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:21.758 [2024-08-13 06:16:23.313117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.758 [2024-08-13 06:16:23.315263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
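Each base bdev in this test is a malloc bdev wrapped by a passthru bdev, created over the raid socket as traced above for BaseBdev1 and BaseBdev2. A condensed sketch of the per-bdev sequence and the raid creation it leads up to, using the RPC calls and sizes that appear in this trace; the loop itself is illustrative shorthand for the repeated calls:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # One 32 MiB, 512-byte-block malloc bdev plus a passthru wrapper per base bdev
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
    done

    # The spare used for the rebuild is built the same way, but behind a delay bdev
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare

    # Assemble the raid5f array (no superblock in this variant), 64 KiB strip size
    $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1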
00:21:21.758 [2024-08-13 06:16:23.315306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:21.758 BaseBdev2 00:21:21.758 06:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:21.758 06:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:21.758 BaseBdev3_malloc 00:21:21.758 06:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:22.020 [2024-08-13 06:16:23.661417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:22.020 [2024-08-13 06:16:23.661485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.020 [2024-08-13 06:16:23.661508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:22.020 [2024-08-13 06:16:23.661520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.020 [2024-08-13 06:16:23.663642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.020 [2024-08-13 06:16:23.663682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:22.020 BaseBdev3 00:21:22.020 06:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:22.280 spare_malloc 00:21:22.280 06:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:22.280 spare_delay 00:21:22.280 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:22.539 [2024-08-13 06:16:24.205255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:22.539 [2024-08-13 06:16:24.205330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.539 [2024-08-13 06:16:24.205356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:22.539 [2024-08-13 06:16:24.205370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.539 [2024-08-13 06:16:24.207477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.539 [2024-08-13 06:16:24.207520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:22.539 spare 00:21:22.539 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:21:22.798 [2024-08-13 06:16:24.401093] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.798 [2024-08-13 06:16:24.402783] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:22.798 [2024-08-13 06:16:24.402849] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:22.798 [2024-08-13 06:16:24.402935] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:21:22.798 [2024-08-13 06:16:24.402943] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:22.798 [2024-08-13 06:16:24.403249] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:21:22.798 [2024-08-13 06:16:24.403642] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:21:22.798 [2024-08-13 06:16:24.403660] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:21:22.798 [2024-08-13 06:16:24.403797] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.798 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.058 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.058 "name": "raid_bdev1", 00:21:23.058 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:23.058 "strip_size_kb": 64, 00:21:23.058 "state": "online", 00:21:23.058 "raid_level": "raid5f", 00:21:23.058 "superblock": false, 00:21:23.058 "num_base_bdevs": 3, 00:21:23.058 "num_base_bdevs_discovered": 3, 00:21:23.058 "num_base_bdevs_operational": 3, 00:21:23.058 "base_bdevs_list": [ 00:21:23.058 { 00:21:23.058 "name": "BaseBdev1", 00:21:23.058 "uuid": "906d1949-c25d-52a1-9f56-b7b65e9a1740", 00:21:23.058 "is_configured": true, 00:21:23.058 "data_offset": 0, 00:21:23.058 "data_size": 65536 00:21:23.058 }, 00:21:23.058 { 00:21:23.058 "name": "BaseBdev2", 00:21:23.058 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:23.058 "is_configured": true, 00:21:23.058 "data_offset": 0, 00:21:23.058 "data_size": 65536 00:21:23.058 }, 00:21:23.058 { 00:21:23.058 "name": "BaseBdev3", 00:21:23.058 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:23.058 "is_configured": true, 00:21:23.058 "data_offset": 0, 00:21:23.058 "data_size": 65536 00:21:23.058 } 00:21:23.058 ] 00:21:23.058 }' 00:21:23.058 06:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.058 06:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.628 06:16:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:23.628 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:21:23.628 [2024-08-13 06:16:25.307767] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.628 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=131072 00:21:23.628 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.628 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.888 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:23.888 [2024-08-13 06:16:25.675001] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:21:24.148 /dev/nbd0 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:24.148 06:16:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:24.148 1+0 records in 00:21:24.148 1+0 records out 00:21:24.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558926 s, 7.3 MB/s 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 128 00:21:24.148 06:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:21:24.408 512+0 records in 00:21:24.408 512+0 records out 00:21:24.408 67108864 bytes (67 MB, 64 MiB) copied, 0.333541 s, 201 MB/s 00:21:24.408 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:24.408 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:24.408 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:24.408 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:24.408 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:24.408 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.408 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:24.668 [2024-08-13 06:16:26.300187] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.668 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev1 00:21:24.929 [2024-08-13 06:16:26.471976] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.929 "name": "raid_bdev1", 00:21:24.929 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:24.929 "strip_size_kb": 64, 00:21:24.929 "state": "online", 00:21:24.929 "raid_level": "raid5f", 00:21:24.929 "superblock": false, 00:21:24.929 "num_base_bdevs": 3, 00:21:24.929 "num_base_bdevs_discovered": 2, 00:21:24.929 "num_base_bdevs_operational": 2, 00:21:24.929 "base_bdevs_list": [ 00:21:24.929 { 00:21:24.929 "name": null, 00:21:24.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.929 "is_configured": false, 00:21:24.929 "data_offset": 0, 00:21:24.929 "data_size": 65536 00:21:24.929 }, 00:21:24.929 { 00:21:24.929 "name": "BaseBdev2", 00:21:24.929 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:24.929 "is_configured": true, 00:21:24.929 "data_offset": 0, 00:21:24.929 "data_size": 65536 00:21:24.929 }, 00:21:24.929 { 00:21:24.929 "name": "BaseBdev3", 00:21:24.929 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:24.929 "is_configured": true, 00:21:24.929 "data_offset": 0, 00:21:24.929 "data_size": 65536 00:21:24.929 } 00:21:24.929 ] 00:21:24.929 }' 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.929 06:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.498 06:16:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:25.758 [2024-08-13 06:16:27.394498] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:25.758 [2024-08-13 06:16:27.398447] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:21:25.758 [2024-08-13 06:16:27.400590] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.758 06:16:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:26.697 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.697 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:26.697 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:26.697 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:26.697 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:26.697 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.698 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.957 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:26.957 "name": "raid_bdev1", 00:21:26.957 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:26.957 "strip_size_kb": 64, 00:21:26.957 "state": "online", 00:21:26.957 "raid_level": "raid5f", 00:21:26.957 "superblock": false, 00:21:26.957 "num_base_bdevs": 3, 00:21:26.957 "num_base_bdevs_discovered": 3, 00:21:26.957 "num_base_bdevs_operational": 3, 00:21:26.957 "process": { 00:21:26.957 "type": "rebuild", 00:21:26.957 "target": "spare", 00:21:26.957 "progress": { 00:21:26.957 "blocks": 22528, 00:21:26.957 "percent": 17 00:21:26.957 } 00:21:26.957 }, 00:21:26.957 "base_bdevs_list": [ 00:21:26.957 { 00:21:26.957 "name": "spare", 00:21:26.957 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:26.957 "is_configured": true, 00:21:26.957 "data_offset": 0, 00:21:26.957 "data_size": 65536 00:21:26.957 }, 00:21:26.958 { 00:21:26.958 "name": "BaseBdev2", 00:21:26.958 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:26.958 "is_configured": true, 00:21:26.958 "data_offset": 0, 00:21:26.958 "data_size": 65536 00:21:26.958 }, 00:21:26.958 { 00:21:26.958 "name": "BaseBdev3", 00:21:26.958 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:26.958 "is_configured": true, 00:21:26.958 "data_offset": 0, 00:21:26.958 "data_size": 65536 00:21:26.958 } 00:21:26.958 ] 00:21:26.958 }' 00:21:26.958 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:26.958 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.958 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:26.958 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.958 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:27.218 [2024-08-13 06:16:28.885111] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.218 [2024-08-13 06:16:28.909271] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.218 [2024-08-13 06:16:28.909326] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.218 [2024-08-13 06:16:28.909344] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.218 [2024-08-13 06:16:28.909352] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: 
Failed to remove target bdev: No such device 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.218 06:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.478 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.478 "name": "raid_bdev1", 00:21:27.478 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:27.478 "strip_size_kb": 64, 00:21:27.478 "state": "online", 00:21:27.478 "raid_level": "raid5f", 00:21:27.478 "superblock": false, 00:21:27.478 "num_base_bdevs": 3, 00:21:27.478 "num_base_bdevs_discovered": 2, 00:21:27.478 "num_base_bdevs_operational": 2, 00:21:27.478 "base_bdevs_list": [ 00:21:27.478 { 00:21:27.478 "name": null, 00:21:27.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.478 "is_configured": false, 00:21:27.478 "data_offset": 0, 00:21:27.478 "data_size": 65536 00:21:27.478 }, 00:21:27.478 { 00:21:27.478 "name": "BaseBdev2", 00:21:27.478 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:27.478 "is_configured": true, 00:21:27.478 "data_offset": 0, 00:21:27.478 "data_size": 65536 00:21:27.478 }, 00:21:27.478 { 00:21:27.478 "name": "BaseBdev3", 00:21:27.478 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:27.478 "is_configured": true, 00:21:27.478 "data_offset": 0, 00:21:27.478 "data_size": 65536 00:21:27.478 } 00:21:27.478 ] 00:21:27.478 }' 00:21:27.478 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.478 06:16:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.048 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.048 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:28.048 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:28.048 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:28.048 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:28.048 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:21:28.048 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.308 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:28.308 "name": "raid_bdev1", 00:21:28.308 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:28.308 "strip_size_kb": 64, 00:21:28.308 "state": "online", 00:21:28.308 "raid_level": "raid5f", 00:21:28.308 "superblock": false, 00:21:28.308 "num_base_bdevs": 3, 00:21:28.308 "num_base_bdevs_discovered": 2, 00:21:28.308 "num_base_bdevs_operational": 2, 00:21:28.308 "base_bdevs_list": [ 00:21:28.308 { 00:21:28.308 "name": null, 00:21:28.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.308 "is_configured": false, 00:21:28.308 "data_offset": 0, 00:21:28.308 "data_size": 65536 00:21:28.308 }, 00:21:28.308 { 00:21:28.308 "name": "BaseBdev2", 00:21:28.308 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:28.308 "is_configured": true, 00:21:28.308 "data_offset": 0, 00:21:28.308 "data_size": 65536 00:21:28.308 }, 00:21:28.308 { 00:21:28.308 "name": "BaseBdev3", 00:21:28.308 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:28.308 "is_configured": true, 00:21:28.308 "data_offset": 0, 00:21:28.308 "data_size": 65536 00:21:28.308 } 00:21:28.308 ] 00:21:28.308 }' 00:21:28.308 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:28.308 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:28.308 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:28.308 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:28.308 06:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.568 [2024-08-13 06:16:30.140811] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.568 [2024-08-13 06:16:30.144572] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:21:28.568 [2024-08-13 06:16:30.146566] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.568 06:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:21:29.507 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.507 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:29.507 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:29.507 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:29.507 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:29.507 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.507 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:29.767 "name": "raid_bdev1", 00:21:29.767 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:29.767 "strip_size_kb": 64, 00:21:29.767 "state": "online", 
00:21:29.767 "raid_level": "raid5f", 00:21:29.767 "superblock": false, 00:21:29.767 "num_base_bdevs": 3, 00:21:29.767 "num_base_bdevs_discovered": 3, 00:21:29.767 "num_base_bdevs_operational": 3, 00:21:29.767 "process": { 00:21:29.767 "type": "rebuild", 00:21:29.767 "target": "spare", 00:21:29.767 "progress": { 00:21:29.767 "blocks": 24576, 00:21:29.767 "percent": 18 00:21:29.767 } 00:21:29.767 }, 00:21:29.767 "base_bdevs_list": [ 00:21:29.767 { 00:21:29.767 "name": "spare", 00:21:29.767 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:29.767 "is_configured": true, 00:21:29.767 "data_offset": 0, 00:21:29.767 "data_size": 65536 00:21:29.767 }, 00:21:29.767 { 00:21:29.767 "name": "BaseBdev2", 00:21:29.767 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:29.767 "is_configured": true, 00:21:29.767 "data_offset": 0, 00:21:29.767 "data_size": 65536 00:21:29.767 }, 00:21:29.767 { 00:21:29.767 "name": "BaseBdev3", 00:21:29.767 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:29.767 "is_configured": true, 00:21:29.767 "data_offset": 0, 00:21:29.767 "data_size": 65536 00:21:29.767 } 00:21:29.767 ] 00:21:29.767 }' 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=943 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.767 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.027 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:30.027 "name": "raid_bdev1", 00:21:30.027 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:30.027 "strip_size_kb": 64, 00:21:30.027 "state": "online", 00:21:30.027 "raid_level": "raid5f", 00:21:30.027 "superblock": false, 00:21:30.027 "num_base_bdevs": 3, 00:21:30.027 "num_base_bdevs_discovered": 3, 00:21:30.027 "num_base_bdevs_operational": 3, 00:21:30.027 "process": { 00:21:30.027 "type": "rebuild", 00:21:30.027 "target": "spare", 00:21:30.027 "progress": 
{ 00:21:30.027 "blocks": 30720, 00:21:30.027 "percent": 23 00:21:30.027 } 00:21:30.027 }, 00:21:30.027 "base_bdevs_list": [ 00:21:30.027 { 00:21:30.027 "name": "spare", 00:21:30.027 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:30.027 "is_configured": true, 00:21:30.027 "data_offset": 0, 00:21:30.027 "data_size": 65536 00:21:30.027 }, 00:21:30.027 { 00:21:30.027 "name": "BaseBdev2", 00:21:30.027 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:30.027 "is_configured": true, 00:21:30.027 "data_offset": 0, 00:21:30.027 "data_size": 65536 00:21:30.027 }, 00:21:30.027 { 00:21:30.027 "name": "BaseBdev3", 00:21:30.027 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:30.027 "is_configured": true, 00:21:30.027 "data_offset": 0, 00:21:30.027 "data_size": 65536 00:21:30.027 } 00:21:30.027 ] 00:21:30.027 }' 00:21:30.028 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:30.028 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.028 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:30.028 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.028 06:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:31.408 "name": "raid_bdev1", 00:21:31.408 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:31.408 "strip_size_kb": 64, 00:21:31.408 "state": "online", 00:21:31.408 "raid_level": "raid5f", 00:21:31.408 "superblock": false, 00:21:31.408 "num_base_bdevs": 3, 00:21:31.408 "num_base_bdevs_discovered": 3, 00:21:31.408 "num_base_bdevs_operational": 3, 00:21:31.408 "process": { 00:21:31.408 "type": "rebuild", 00:21:31.408 "target": "spare", 00:21:31.408 "progress": { 00:21:31.408 "blocks": 57344, 00:21:31.408 "percent": 43 00:21:31.408 } 00:21:31.408 }, 00:21:31.408 "base_bdevs_list": [ 00:21:31.408 { 00:21:31.408 "name": "spare", 00:21:31.408 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:31.408 "is_configured": true, 00:21:31.408 "data_offset": 0, 00:21:31.408 "data_size": 65536 00:21:31.408 }, 00:21:31.408 { 00:21:31.408 "name": "BaseBdev2", 00:21:31.408 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:31.408 "is_configured": true, 00:21:31.408 "data_offset": 0, 00:21:31.408 "data_size": 65536 00:21:31.408 }, 00:21:31.408 { 00:21:31.408 "name": "BaseBdev3", 00:21:31.408 "uuid": 
"b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:31.408 "is_configured": true, 00:21:31.408 "data_offset": 0, 00:21:31.408 "data_size": 65536 00:21:31.408 } 00:21:31.408 ] 00:21:31.408 }' 00:21:31.408 06:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:31.408 06:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.408 06:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:31.408 06:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.408 06:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.346 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.606 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:32.606 "name": "raid_bdev1", 00:21:32.606 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:32.606 "strip_size_kb": 64, 00:21:32.606 "state": "online", 00:21:32.606 "raid_level": "raid5f", 00:21:32.606 "superblock": false, 00:21:32.606 "num_base_bdevs": 3, 00:21:32.606 "num_base_bdevs_discovered": 3, 00:21:32.606 "num_base_bdevs_operational": 3, 00:21:32.606 "process": { 00:21:32.606 "type": "rebuild", 00:21:32.606 "target": "spare", 00:21:32.606 "progress": { 00:21:32.606 "blocks": 81920, 00:21:32.606 "percent": 62 00:21:32.606 } 00:21:32.606 }, 00:21:32.606 "base_bdevs_list": [ 00:21:32.606 { 00:21:32.606 "name": "spare", 00:21:32.606 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 0, 00:21:32.606 "data_size": 65536 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "name": "BaseBdev2", 00:21:32.606 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 0, 00:21:32.606 "data_size": 65536 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "name": "BaseBdev3", 00:21:32.606 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 0, 00:21:32.606 "data_size": 65536 00:21:32.606 } 00:21:32.606 ] 00:21:32.606 }' 00:21:32.606 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:32.606 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.606 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:32.606 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
00:21:32.606 06:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:33.986 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:33.986 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.986 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:33.986 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:33.987 "name": "raid_bdev1", 00:21:33.987 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:33.987 "strip_size_kb": 64, 00:21:33.987 "state": "online", 00:21:33.987 "raid_level": "raid5f", 00:21:33.987 "superblock": false, 00:21:33.987 "num_base_bdevs": 3, 00:21:33.987 "num_base_bdevs_discovered": 3, 00:21:33.987 "num_base_bdevs_operational": 3, 00:21:33.987 "process": { 00:21:33.987 "type": "rebuild", 00:21:33.987 "target": "spare", 00:21:33.987 "progress": { 00:21:33.987 "blocks": 108544, 00:21:33.987 "percent": 82 00:21:33.987 } 00:21:33.987 }, 00:21:33.987 "base_bdevs_list": [ 00:21:33.987 { 00:21:33.987 "name": "spare", 00:21:33.987 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:33.987 "is_configured": true, 00:21:33.987 "data_offset": 0, 00:21:33.987 "data_size": 65536 00:21:33.987 }, 00:21:33.987 { 00:21:33.987 "name": "BaseBdev2", 00:21:33.987 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:33.987 "is_configured": true, 00:21:33.987 "data_offset": 0, 00:21:33.987 "data_size": 65536 00:21:33.987 }, 00:21:33.987 { 00:21:33.987 "name": "BaseBdev3", 00:21:33.987 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:33.987 "is_configured": true, 00:21:33.987 "data_offset": 0, 00:21:33.987 "data_size": 65536 00:21:33.987 } 00:21:33.987 ] 00:21:33.987 }' 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.987 06:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:34.954 [2024-08-13 06:16:36.579775] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:34.954 [2024-08-13 06:16:36.579855] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:34.954 [2024-08-13 06:16:36.579890] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.954 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.245 "name": "raid_bdev1", 00:21:35.245 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:35.245 "strip_size_kb": 64, 00:21:35.245 "state": "online", 00:21:35.245 "raid_level": "raid5f", 00:21:35.245 "superblock": false, 00:21:35.245 "num_base_bdevs": 3, 00:21:35.245 "num_base_bdevs_discovered": 3, 00:21:35.245 "num_base_bdevs_operational": 3, 00:21:35.245 "base_bdevs_list": [ 00:21:35.245 { 00:21:35.245 "name": "spare", 00:21:35.245 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:35.245 "is_configured": true, 00:21:35.245 "data_offset": 0, 00:21:35.245 "data_size": 65536 00:21:35.245 }, 00:21:35.245 { 00:21:35.245 "name": "BaseBdev2", 00:21:35.245 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:35.245 "is_configured": true, 00:21:35.245 "data_offset": 0, 00:21:35.245 "data_size": 65536 00:21:35.245 }, 00:21:35.245 { 00:21:35.245 "name": "BaseBdev3", 00:21:35.245 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:35.245 "is_configured": true, 00:21:35.245 "data_offset": 0, 00:21:35.245 "data_size": 65536 00:21:35.245 } 00:21:35.245 ] 00:21:35.245 }' 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.245 06:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.523 "name": "raid_bdev1", 00:21:35.523 "uuid": 
"066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:35.523 "strip_size_kb": 64, 00:21:35.523 "state": "online", 00:21:35.523 "raid_level": "raid5f", 00:21:35.523 "superblock": false, 00:21:35.523 "num_base_bdevs": 3, 00:21:35.523 "num_base_bdevs_discovered": 3, 00:21:35.523 "num_base_bdevs_operational": 3, 00:21:35.523 "base_bdevs_list": [ 00:21:35.523 { 00:21:35.523 "name": "spare", 00:21:35.523 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:35.523 "is_configured": true, 00:21:35.523 "data_offset": 0, 00:21:35.523 "data_size": 65536 00:21:35.523 }, 00:21:35.523 { 00:21:35.523 "name": "BaseBdev2", 00:21:35.523 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:35.523 "is_configured": true, 00:21:35.523 "data_offset": 0, 00:21:35.523 "data_size": 65536 00:21:35.523 }, 00:21:35.523 { 00:21:35.523 "name": "BaseBdev3", 00:21:35.523 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:35.523 "is_configured": true, 00:21:35.523 "data_offset": 0, 00:21:35.523 "data_size": 65536 00:21:35.523 } 00:21:35.523 ] 00:21:35.523 }' 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.523 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.783 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.783 "name": "raid_bdev1", 00:21:35.783 "uuid": "066d4bcf-af4a-42b7-8e5b-9f630ac7677b", 00:21:35.783 "strip_size_kb": 64, 00:21:35.783 "state": "online", 00:21:35.783 "raid_level": "raid5f", 00:21:35.783 "superblock": false, 00:21:35.783 "num_base_bdevs": 3, 00:21:35.783 "num_base_bdevs_discovered": 3, 00:21:35.783 "num_base_bdevs_operational": 3, 00:21:35.783 "base_bdevs_list": [ 00:21:35.783 { 00:21:35.783 "name": "spare", 00:21:35.783 "uuid": "97a15d64-cad6-542a-ab21-bf506c12eb28", 00:21:35.783 "is_configured": true, 00:21:35.783 "data_offset": 0, 00:21:35.783 
"data_size": 65536 00:21:35.783 }, 00:21:35.783 { 00:21:35.783 "name": "BaseBdev2", 00:21:35.783 "uuid": "a0231093-4b07-5d0e-bfbb-3f2d2d1a128f", 00:21:35.783 "is_configured": true, 00:21:35.783 "data_offset": 0, 00:21:35.783 "data_size": 65536 00:21:35.783 }, 00:21:35.783 { 00:21:35.783 "name": "BaseBdev3", 00:21:35.783 "uuid": "b9eb7276-ada0-5260-9cb8-14764a9ace4e", 00:21:35.783 "is_configured": true, 00:21:35.783 "data_offset": 0, 00:21:35.783 "data_size": 65536 00:21:35.783 } 00:21:35.783 ] 00:21:35.783 }' 00:21:35.783 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.783 06:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.352 06:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:36.352 [2024-08-13 06:16:38.134269] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.352 [2024-08-13 06:16:38.134314] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.352 [2024-08-13 06:16:38.134555] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.352 [2024-08-13 06:16:38.134681] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.352 [2024-08-13 06:16:38.134699] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.612 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:36.872 /dev/nbd0 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.872 06:16:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.872 1+0 records in 00:21:36.872 1+0 records out 00:21:36.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00377562 s, 1.1 MB/s 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.872 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:37.132 /dev/nbd1 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.132 1+0 records in 00:21:37.132 1+0 records out 00:21:37.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396796 s, 10.3 MB/s 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.132 06:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.391 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 99696 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 99696 ']' 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 99696 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99696 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99696' 00:21:37.652 killing process with pid 99696 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 99696 00:21:37.652 Received shutdown signal, test time was about 60.000000 seconds 00:21:37.652 00:21:37.652 Latency(us) 00:21:37.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.652 =================================================================================================================== 00:21:37.652 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:37.652 [2024-08-13 06:16:39.359897] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.652 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 99696 00:21:37.652 [2024-08-13 06:16:39.400655] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.911 06:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:21:37.911 00:21:37.911 real 0m17.950s 00:21:37.911 user 0m26.403s 00:21:37.911 sys 0m2.810s 00:21:37.911 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:37.911 06:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.911 ************************************ 00:21:37.911 END TEST raid5f_rebuild_test 00:21:37.911 ************************************ 00:21:37.911 06:16:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:21:37.911 06:16:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:21:37.911 06:16:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:37.911 06:16:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.171 ************************************ 00:21:38.171 START TEST raid5f_rebuild_test_sb 00:21:38.171 ************************************ 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 true false true 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@587 -- # local background_io=false 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:38.171 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=100163 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 100163 /var/tmp/spdk-raid.sock 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 100163 ']' 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:38.172 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:38.172 06:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.172 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:38.172 Zero copy mechanism will not be used. 00:21:38.172 [2024-08-13 06:16:39.804961] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:21:38.172 [2024-08-13 06:16:39.805098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100163 ] 00:21:38.172 [2024-08-13 06:16:39.948778] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.432 [2024-08-13 06:16:39.994261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.432 [2024-08-13 06:16:40.036889] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.432 [2024-08-13 06:16:40.036929] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.001 06:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:39.001 06:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:21:39.001 06:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:39.001 06:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:39.261 BaseBdev1_malloc 00:21:39.261 06:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:39.261 [2024-08-13 06:16:40.977114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:39.261 [2024-08-13 06:16:40.977186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.261 [2024-08-13 06:16:40.977212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:39.261 [2024-08-13 06:16:40.977222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.261 [2024-08-13 06:16:40.979330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.261 [2024-08-13 06:16:40.979373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:39.261 BaseBdev1 00:21:39.261 06:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:39.261 06:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:39.521 BaseBdev2_malloc 00:21:39.521 06:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:39.781 
[2024-08-13 06:16:41.341290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:39.781 [2024-08-13 06:16:41.341364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.781 [2024-08-13 06:16:41.341387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:39.781 [2024-08-13 06:16:41.341398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.781 [2024-08-13 06:16:41.343523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.781 [2024-08-13 06:16:41.343565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:39.781 BaseBdev2 00:21:39.781 06:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:39.781 06:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:39.781 BaseBdev3_malloc 00:21:40.041 06:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:40.041 [2024-08-13 06:16:41.761641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:40.041 [2024-08-13 06:16:41.761713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.041 [2024-08-13 06:16:41.761737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:40.041 [2024-08-13 06:16:41.761748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.041 [2024-08-13 06:16:41.763822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.041 [2024-08-13 06:16:41.763861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:40.041 BaseBdev3 00:21:40.041 06:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:40.300 spare_malloc 00:21:40.301 06:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:40.560 spare_delay 00:21:40.560 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:40.560 [2024-08-13 06:16:42.345242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:40.560 [2024-08-13 06:16:42.345307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.560 [2024-08-13 06:16:42.345330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:40.560 [2024-08-13 06:16:42.345344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.560 [2024-08-13 06:16:42.347387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.560 [2024-08-13 06:16:42.347427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:40.560 spare 00:21:40.819 06:16:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:21:40.819 [2024-08-13 06:16:42.509062] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.819 [2024-08-13 06:16:42.510714] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.819 [2024-08-13 06:16:42.510773] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.819 [2024-08-13 06:16:42.510924] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:21:40.819 [2024-08-13 06:16:42.510941] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:40.819 [2024-08-13 06:16:42.511214] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:21:40.819 [2024-08-13 06:16:42.511620] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:21:40.819 [2024-08-13 06:16:42.511641] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:21:40.819 [2024-08-13 06:16:42.511752] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.819 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:40.819 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:40.819 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.819 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:40.819 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.819 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:40.819 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.820 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.820 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.820 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.820 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.820 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.079 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.079 "name": "raid_bdev1", 00:21:41.079 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:41.079 "strip_size_kb": 64, 00:21:41.079 "state": "online", 00:21:41.079 "raid_level": "raid5f", 00:21:41.079 "superblock": true, 00:21:41.079 "num_base_bdevs": 3, 00:21:41.079 "num_base_bdevs_discovered": 3, 00:21:41.079 "num_base_bdevs_operational": 3, 00:21:41.079 "base_bdevs_list": [ 00:21:41.079 { 00:21:41.079 "name": "BaseBdev1", 00:21:41.079 "uuid": "bea0edcf-4934-5804-80fc-39d247e92f75", 00:21:41.079 "is_configured": true, 00:21:41.079 "data_offset": 2048, 00:21:41.079 "data_size": 63488 00:21:41.079 }, 00:21:41.079 { 
00:21:41.079 "name": "BaseBdev2", 00:21:41.079 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:41.079 "is_configured": true, 00:21:41.079 "data_offset": 2048, 00:21:41.079 "data_size": 63488 00:21:41.079 }, 00:21:41.079 { 00:21:41.079 "name": "BaseBdev3", 00:21:41.079 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:41.079 "is_configured": true, 00:21:41.079 "data_offset": 2048, 00:21:41.079 "data_size": 63488 00:21:41.079 } 00:21:41.079 ] 00:21:41.079 }' 00:21:41.079 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.079 06:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.649 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:41.649 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:21:41.649 [2024-08-13 06:16:43.415736] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.649 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=126976 00:21:41.649 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.649 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.909 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:42.169 [2024-08-13 06:16:43.802913] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:21:42.169 /dev/nbd0 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 
00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:42.169 1+0 records in 00:21:42.169 1+0 records out 00:21:42.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478319 s, 8.6 MB/s 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 128 00:21:42.169 06:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:21:42.428 496+0 records in 00:21:42.428 496+0 records out 00:21:42.428 65011712 bytes (65 MB, 62 MiB) copied, 0.306656 s, 212 MB/s 00:21:42.428 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:42.428 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:42.428 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:42.428 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:42.428 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:42.428 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.428 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.688 [2024-08-13 06:16:44.396044] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.688 
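The write that just completed, and the rebuild sequence that follows in the trace, can be condensed into the sketch below. Device names, sizes and RPC invocations are the ones from the trace; the waitfornbd/cleanup logic and the exact timing of the state checks are omitted, so treat it as an outline rather than the full test:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# expose raid_bdev1 over NBD and fill it with random data (496 x 128 KiB = 62 MiB)
$rpc nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
$rpc nbd_stop_disk /dev/nbd0

# degrade the array, then rebuild onto the spare
$rpc bdev_raid_remove_base_bdev BaseBdev1
$rpc bdev_raid_add_base_bdev raid_bdev1 spare

# poll rebuild progress using the process fields shown in the bdev_raid_get_bdevs JSON
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"'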
06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.688 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:42.948 [2024-08-13 06:16:44.585298] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.948 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.207 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.207 "name": "raid_bdev1", 00:21:43.207 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:43.208 "strip_size_kb": 64, 00:21:43.208 "state": "online", 00:21:43.208 "raid_level": "raid5f", 00:21:43.208 "superblock": true, 00:21:43.208 "num_base_bdevs": 3, 00:21:43.208 "num_base_bdevs_discovered": 2, 00:21:43.208 "num_base_bdevs_operational": 2, 00:21:43.208 "base_bdevs_list": [ 00:21:43.208 { 00:21:43.208 "name": null, 00:21:43.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.208 "is_configured": false, 00:21:43.208 "data_offset": 2048, 00:21:43.208 "data_size": 63488 00:21:43.208 }, 00:21:43.208 { 00:21:43.208 "name": "BaseBdev2", 00:21:43.208 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:43.208 "is_configured": true, 00:21:43.208 "data_offset": 2048, 00:21:43.208 "data_size": 63488 00:21:43.208 }, 00:21:43.208 { 00:21:43.208 "name": "BaseBdev3", 00:21:43.208 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:43.208 "is_configured": true, 00:21:43.208 "data_offset": 
2048, 00:21:43.208 "data_size": 63488 00:21:43.208 } 00:21:43.208 ] 00:21:43.208 }' 00:21:43.208 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.208 06:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.777 06:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.777 [2024-08-13 06:16:45.515755] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.777 [2024-08-13 06:16:45.519535] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:21:43.777 [2024-08-13 06:16:45.521487] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:43.777 06:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:45.156 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:45.156 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:45.156 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:45.156 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:45.156 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:45.157 "name": "raid_bdev1", 00:21:45.157 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:45.157 "strip_size_kb": 64, 00:21:45.157 "state": "online", 00:21:45.157 "raid_level": "raid5f", 00:21:45.157 "superblock": true, 00:21:45.157 "num_base_bdevs": 3, 00:21:45.157 "num_base_bdevs_discovered": 3, 00:21:45.157 "num_base_bdevs_operational": 3, 00:21:45.157 "process": { 00:21:45.157 "type": "rebuild", 00:21:45.157 "target": "spare", 00:21:45.157 "progress": { 00:21:45.157 "blocks": 22528, 00:21:45.157 "percent": 17 00:21:45.157 } 00:21:45.157 }, 00:21:45.157 "base_bdevs_list": [ 00:21:45.157 { 00:21:45.157 "name": "spare", 00:21:45.157 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:45.157 "is_configured": true, 00:21:45.157 "data_offset": 2048, 00:21:45.157 "data_size": 63488 00:21:45.157 }, 00:21:45.157 { 00:21:45.157 "name": "BaseBdev2", 00:21:45.157 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:45.157 "is_configured": true, 00:21:45.157 "data_offset": 2048, 00:21:45.157 "data_size": 63488 00:21:45.157 }, 00:21:45.157 { 00:21:45.157 "name": "BaseBdev3", 00:21:45.157 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:45.157 "is_configured": true, 00:21:45.157 "data_offset": 2048, 00:21:45.157 "data_size": 63488 00:21:45.157 } 00:21:45.157 ] 00:21:45.157 }' 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:45.157 06:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:45.416 [2024-08-13 06:16:47.011972] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.416 [2024-08-13 06:16:47.030243] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:45.416 [2024-08-13 06:16:47.030297] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.416 [2024-08-13 06:16:47.030314] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.416 [2024-08-13 06:16:47.030321] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.416 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.675 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.675 "name": "raid_bdev1", 00:21:45.675 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:45.675 "strip_size_kb": 64, 00:21:45.675 "state": "online", 00:21:45.675 "raid_level": "raid5f", 00:21:45.675 "superblock": true, 00:21:45.675 "num_base_bdevs": 3, 00:21:45.675 "num_base_bdevs_discovered": 2, 00:21:45.675 "num_base_bdevs_operational": 2, 00:21:45.675 "base_bdevs_list": [ 00:21:45.675 { 00:21:45.675 "name": null, 00:21:45.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.675 "is_configured": false, 00:21:45.675 "data_offset": 2048, 00:21:45.675 "data_size": 63488 00:21:45.675 }, 00:21:45.675 { 00:21:45.675 "name": "BaseBdev2", 00:21:45.675 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:45.675 "is_configured": true, 00:21:45.675 "data_offset": 2048, 00:21:45.675 "data_size": 63488 00:21:45.675 }, 00:21:45.675 { 00:21:45.675 "name": "BaseBdev3", 00:21:45.675 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:45.675 "is_configured": true, 00:21:45.675 "data_offset": 2048, 
00:21:45.675 "data_size": 63488 00:21:45.675 } 00:21:45.675 ] 00:21:45.675 }' 00:21:45.675 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.675 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.245 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.245 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:46.245 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:46.245 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:46.245 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:46.245 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.245 06:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.245 06:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:46.245 "name": "raid_bdev1", 00:21:46.245 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:46.245 "strip_size_kb": 64, 00:21:46.245 "state": "online", 00:21:46.245 "raid_level": "raid5f", 00:21:46.245 "superblock": true, 00:21:46.245 "num_base_bdevs": 3, 00:21:46.245 "num_base_bdevs_discovered": 2, 00:21:46.245 "num_base_bdevs_operational": 2, 00:21:46.245 "base_bdevs_list": [ 00:21:46.245 { 00:21:46.245 "name": null, 00:21:46.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.245 "is_configured": false, 00:21:46.245 "data_offset": 2048, 00:21:46.245 "data_size": 63488 00:21:46.245 }, 00:21:46.245 { 00:21:46.245 "name": "BaseBdev2", 00:21:46.245 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:46.245 "is_configured": true, 00:21:46.245 "data_offset": 2048, 00:21:46.245 "data_size": 63488 00:21:46.245 }, 00:21:46.245 { 00:21:46.245 "name": "BaseBdev3", 00:21:46.245 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:46.245 "is_configured": true, 00:21:46.245 "data_offset": 2048, 00:21:46.245 "data_size": 63488 00:21:46.245 } 00:21:46.245 ] 00:21:46.245 }' 00:21:46.245 06:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:46.504 06:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:46.504 06:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:46.504 06:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:46.504 06:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.504 [2024-08-13 06:16:48.253737] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.504 [2024-08-13 06:16:48.257208] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:21:46.504 [2024-08-13 06:16:48.259199] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.504 06:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:47.885 "name": "raid_bdev1", 00:21:47.885 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:47.885 "strip_size_kb": 64, 00:21:47.885 "state": "online", 00:21:47.885 "raid_level": "raid5f", 00:21:47.885 "superblock": true, 00:21:47.885 "num_base_bdevs": 3, 00:21:47.885 "num_base_bdevs_discovered": 3, 00:21:47.885 "num_base_bdevs_operational": 3, 00:21:47.885 "process": { 00:21:47.885 "type": "rebuild", 00:21:47.885 "target": "spare", 00:21:47.885 "progress": { 00:21:47.885 "blocks": 24576, 00:21:47.885 "percent": 19 00:21:47.885 } 00:21:47.885 }, 00:21:47.885 "base_bdevs_list": [ 00:21:47.885 { 00:21:47.885 "name": "spare", 00:21:47.885 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:47.885 "is_configured": true, 00:21:47.885 "data_offset": 2048, 00:21:47.885 "data_size": 63488 00:21:47.885 }, 00:21:47.885 { 00:21:47.885 "name": "BaseBdev2", 00:21:47.885 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:47.885 "is_configured": true, 00:21:47.885 "data_offset": 2048, 00:21:47.885 "data_size": 63488 00:21:47.885 }, 00:21:47.885 { 00:21:47.885 "name": "BaseBdev3", 00:21:47.885 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:47.885 "is_configured": true, 00:21:47.885 "data_offset": 2048, 00:21:47.885 "data_size": 63488 00:21:47.885 } 00:21:47.885 ] 00:21:47.885 }' 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:21:47.885 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=961 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.885 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.145 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:48.145 "name": "raid_bdev1", 00:21:48.145 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:48.145 "strip_size_kb": 64, 00:21:48.145 "state": "online", 00:21:48.145 "raid_level": "raid5f", 00:21:48.145 "superblock": true, 00:21:48.145 "num_base_bdevs": 3, 00:21:48.145 "num_base_bdevs_discovered": 3, 00:21:48.145 "num_base_bdevs_operational": 3, 00:21:48.145 "process": { 00:21:48.145 "type": "rebuild", 00:21:48.146 "target": "spare", 00:21:48.146 "progress": { 00:21:48.146 "blocks": 28672, 00:21:48.146 "percent": 22 00:21:48.146 } 00:21:48.146 }, 00:21:48.146 "base_bdevs_list": [ 00:21:48.146 { 00:21:48.146 "name": "spare", 00:21:48.146 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:48.146 "is_configured": true, 00:21:48.146 "data_offset": 2048, 00:21:48.146 "data_size": 63488 00:21:48.146 }, 00:21:48.146 { 00:21:48.146 "name": "BaseBdev2", 00:21:48.146 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:48.146 "is_configured": true, 00:21:48.146 "data_offset": 2048, 00:21:48.146 "data_size": 63488 00:21:48.146 }, 00:21:48.146 { 00:21:48.146 "name": "BaseBdev3", 00:21:48.146 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:48.146 "is_configured": true, 00:21:48.146 "data_offset": 2048, 00:21:48.146 "data_size": 63488 00:21:48.146 } 00:21:48.146 ] 00:21:48.146 }' 00:21:48.146 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:48.146 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.146 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:48.146 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.146 06:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:49.084 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:49.084 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.343 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:49.343 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:49.343 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:49.343 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:49.343 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.343 06:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.343 06:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:49.343 "name": "raid_bdev1", 00:21:49.343 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:49.343 "strip_size_kb": 64, 00:21:49.343 "state": "online", 00:21:49.343 "raid_level": "raid5f", 00:21:49.343 "superblock": true, 00:21:49.343 "num_base_bdevs": 3, 00:21:49.343 "num_base_bdevs_discovered": 3, 00:21:49.343 "num_base_bdevs_operational": 3, 00:21:49.343 "process": { 00:21:49.343 "type": "rebuild", 00:21:49.343 "target": "spare", 00:21:49.343 "progress": { 00:21:49.343 "blocks": 55296, 00:21:49.343 "percent": 43 00:21:49.343 } 00:21:49.343 }, 00:21:49.343 "base_bdevs_list": [ 00:21:49.343 { 00:21:49.343 "name": "spare", 00:21:49.343 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:49.343 "is_configured": true, 00:21:49.343 "data_offset": 2048, 00:21:49.343 "data_size": 63488 00:21:49.343 }, 00:21:49.343 { 00:21:49.343 "name": "BaseBdev2", 00:21:49.343 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:49.343 "is_configured": true, 00:21:49.343 "data_offset": 2048, 00:21:49.343 "data_size": 63488 00:21:49.343 }, 00:21:49.343 { 00:21:49.343 "name": "BaseBdev3", 00:21:49.343 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:49.343 "is_configured": true, 00:21:49.343 "data_offset": 2048, 00:21:49.343 "data_size": 63488 00:21:49.343 } 00:21:49.343 ] 00:21:49.343 }' 00:21:49.343 06:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:49.343 06:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.343 06:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:49.603 06:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.603 06:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:50.541 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:50.541 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.542 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:50.542 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:50.542 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:50.542 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:50.542 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.542 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.801 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:50.801 "name": "raid_bdev1", 00:21:50.801 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:50.801 "strip_size_kb": 64, 00:21:50.801 "state": "online", 00:21:50.801 "raid_level": "raid5f", 00:21:50.801 "superblock": true, 00:21:50.801 "num_base_bdevs": 3, 
00:21:50.801 "num_base_bdevs_discovered": 3, 00:21:50.801 "num_base_bdevs_operational": 3, 00:21:50.801 "process": { 00:21:50.801 "type": "rebuild", 00:21:50.801 "target": "spare", 00:21:50.801 "progress": { 00:21:50.801 "blocks": 81920, 00:21:50.801 "percent": 64 00:21:50.801 } 00:21:50.801 }, 00:21:50.801 "base_bdevs_list": [ 00:21:50.801 { 00:21:50.801 "name": "spare", 00:21:50.801 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:50.801 "is_configured": true, 00:21:50.801 "data_offset": 2048, 00:21:50.801 "data_size": 63488 00:21:50.801 }, 00:21:50.801 { 00:21:50.801 "name": "BaseBdev2", 00:21:50.801 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:50.801 "is_configured": true, 00:21:50.801 "data_offset": 2048, 00:21:50.801 "data_size": 63488 00:21:50.801 }, 00:21:50.801 { 00:21:50.801 "name": "BaseBdev3", 00:21:50.801 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:50.801 "is_configured": true, 00:21:50.801 "data_offset": 2048, 00:21:50.801 "data_size": 63488 00:21:50.801 } 00:21:50.801 ] 00:21:50.801 }' 00:21:50.801 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:50.801 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.801 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:50.801 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.801 06:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.740 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.000 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:52.000 "name": "raid_bdev1", 00:21:52.000 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:52.000 "strip_size_kb": 64, 00:21:52.000 "state": "online", 00:21:52.000 "raid_level": "raid5f", 00:21:52.000 "superblock": true, 00:21:52.000 "num_base_bdevs": 3, 00:21:52.000 "num_base_bdevs_discovered": 3, 00:21:52.000 "num_base_bdevs_operational": 3, 00:21:52.000 "process": { 00:21:52.000 "type": "rebuild", 00:21:52.000 "target": "spare", 00:21:52.000 "progress": { 00:21:52.000 "blocks": 108544, 00:21:52.000 "percent": 85 00:21:52.000 } 00:21:52.000 }, 00:21:52.000 "base_bdevs_list": [ 00:21:52.000 { 00:21:52.000 "name": "spare", 00:21:52.000 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:52.000 "is_configured": true, 00:21:52.000 "data_offset": 2048, 00:21:52.000 "data_size": 63488 00:21:52.000 }, 00:21:52.000 { 00:21:52.000 "name": 
"BaseBdev2", 00:21:52.000 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:52.000 "is_configured": true, 00:21:52.000 "data_offset": 2048, 00:21:52.000 "data_size": 63488 00:21:52.000 }, 00:21:52.000 { 00:21:52.000 "name": "BaseBdev3", 00:21:52.000 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:52.000 "is_configured": true, 00:21:52.000 "data_offset": 2048, 00:21:52.000 "data_size": 63488 00:21:52.000 } 00:21:52.000 ] 00:21:52.000 }' 00:21:52.000 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:52.000 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.000 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:52.000 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.000 06:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:52.937 [2024-08-13 06:16:54.490821] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:52.937 [2024-08-13 06:16:54.490899] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:52.937 [2024-08-13 06:16:54.490998] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:53.207 "name": "raid_bdev1", 00:21:53.207 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:53.207 "strip_size_kb": 64, 00:21:53.207 "state": "online", 00:21:53.207 "raid_level": "raid5f", 00:21:53.207 "superblock": true, 00:21:53.207 "num_base_bdevs": 3, 00:21:53.207 "num_base_bdevs_discovered": 3, 00:21:53.207 "num_base_bdevs_operational": 3, 00:21:53.207 "base_bdevs_list": [ 00:21:53.207 { 00:21:53.207 "name": "spare", 00:21:53.207 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:53.207 "is_configured": true, 00:21:53.207 "data_offset": 2048, 00:21:53.207 "data_size": 63488 00:21:53.207 }, 00:21:53.207 { 00:21:53.207 "name": "BaseBdev2", 00:21:53.207 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:53.207 "is_configured": true, 00:21:53.207 "data_offset": 2048, 00:21:53.207 "data_size": 63488 00:21:53.207 }, 00:21:53.207 { 00:21:53.207 "name": "BaseBdev3", 00:21:53.207 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:53.207 "is_configured": true, 00:21:53.207 "data_offset": 2048, 00:21:53.207 "data_size": 63488 00:21:53.207 } 
00:21:53.207 ] 00:21:53.207 }' 00:21:53.207 06:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:53.479 "name": "raid_bdev1", 00:21:53.479 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:53.479 "strip_size_kb": 64, 00:21:53.479 "state": "online", 00:21:53.479 "raid_level": "raid5f", 00:21:53.479 "superblock": true, 00:21:53.479 "num_base_bdevs": 3, 00:21:53.479 "num_base_bdevs_discovered": 3, 00:21:53.479 "num_base_bdevs_operational": 3, 00:21:53.479 "base_bdevs_list": [ 00:21:53.479 { 00:21:53.479 "name": "spare", 00:21:53.479 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:53.479 "is_configured": true, 00:21:53.479 "data_offset": 2048, 00:21:53.479 "data_size": 63488 00:21:53.479 }, 00:21:53.479 { 00:21:53.479 "name": "BaseBdev2", 00:21:53.479 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:53.479 "is_configured": true, 00:21:53.479 "data_offset": 2048, 00:21:53.479 "data_size": 63488 00:21:53.479 }, 00:21:53.479 { 00:21:53.479 "name": "BaseBdev3", 00:21:53.479 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:53.479 "is_configured": true, 00:21:53.479 "data_offset": 2048, 00:21:53.479 "data_size": 63488 00:21:53.479 } 00:21:53.479 ] 00:21:53.479 }' 00:21:53.479 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.739 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.999 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.999 "name": "raid_bdev1", 00:21:53.999 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:53.999 "strip_size_kb": 64, 00:21:53.999 "state": "online", 00:21:53.999 "raid_level": "raid5f", 00:21:53.999 "superblock": true, 00:21:53.999 "num_base_bdevs": 3, 00:21:53.999 "num_base_bdevs_discovered": 3, 00:21:53.999 "num_base_bdevs_operational": 3, 00:21:53.999 "base_bdevs_list": [ 00:21:53.999 { 00:21:53.999 "name": "spare", 00:21:53.999 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:53.999 "is_configured": true, 00:21:53.999 "data_offset": 2048, 00:21:53.999 "data_size": 63488 00:21:53.999 }, 00:21:53.999 { 00:21:53.999 "name": "BaseBdev2", 00:21:53.999 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:53.999 "is_configured": true, 00:21:53.999 "data_offset": 2048, 00:21:53.999 "data_size": 63488 00:21:53.999 }, 00:21:53.999 { 00:21:53.999 "name": "BaseBdev3", 00:21:53.999 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:53.999 "is_configured": true, 00:21:53.999 "data_offset": 2048, 00:21:53.999 "data_size": 63488 00:21:53.999 } 00:21:53.999 ] 00:21:53.999 }' 00:21:53.999 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.999 06:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.569 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:54.569 [2024-08-13 06:16:56.336385] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:54.569 [2024-08-13 06:16:56.336419] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:54.569 [2024-08-13 06:16:56.336504] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.569 [2024-08-13 06:16:56.336577] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:54.569 [2024-08-13 06:16:56.336591] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 
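The loop traced above is the test's rebuild-progress poll: each pass fetches all raid bdevs over the RPC socket, selects raid_bdev1 with jq, checks that .process.type is still "rebuild" and .process.target is "spare", and sleeps one second; once the rebuild completes, the process object disappears from the JSON and the jq fallback (// "none") lets the loop break. A minimal standalone sketch of that pattern, assuming the same rpc.py path and /var/tmp/spdk-raid.sock socket as the trace; the wait_for_rebuild helper name is hypothetical (bdev_raid.sh open-codes this loop at @722-@726 rather than wrapping it):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Hypothetical helper: poll a raid bdev until its rebuild process is gone.
    wait_for_rebuild() {
        local name=$1 timeout=${2:-60} ptype
        SECONDS=0  # bash's built-in elapsed-seconds counter, as in the trace
        while (( SECONDS < timeout )); do
            ptype=$($rpc bdev_raid_get_bdevs all |
                jq -r ".[] | select(.name == \"$name\") | .process.type // \"none\"")
            # once the rebuild finishes, .process is absent and the jq
            # fallback yields "none"
            [[ $ptype == none ]] && return 0
            sleep 1
        done
        return 1  # rebuild did not finish within $timeout seconds
    }

    wait_for_rebuild raid_bdev1 90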
00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:54.829 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:55.089 /dev/nbd0 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:55.089 1+0 records in 00:21:55.089 1+0 records out 00:21:55.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472495 s, 8.7 MB/s 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:21:55.089 
06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.089 06:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:55.349 /dev/nbd1 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:55.349 1+0 records in 00:21:55.349 1+0 records out 00:21:55.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494916 s, 8.3 MB/s 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.349 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.609 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:21:55.869 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:56.128 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:56.128 [2024-08-13 06:16:57.874823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:56.128 [2024-08-13 06:16:57.874879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.129 [2024-08-13 06:16:57.874897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:56.129 [2024-08-13 06:16:57.874907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.129 [2024-08-13 06:16:57.876923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.129 [2024-08-13 06:16:57.876965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:56.129 [2024-08-13 06:16:57.877043] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:56.129 [2024-08-13 06:16:57.877141] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:56.129 [2024-08-13 06:16:57.877256] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.129 
[2024-08-13 06:16:57.877354] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:56.129 spare 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.129 06:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.389 [2024-08-13 06:16:57.977246] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:21:56.389 [2024-08-13 06:16:57.977276] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:56.389 [2024-08-13 06:16:57.977505] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:21:56.389 [2024-08-13 06:16:57.977881] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:21:56.389 [2024-08-13 06:16:57.977900] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:21:56.389 [2024-08-13 06:16:57.977998] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.389 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.389 "name": "raid_bdev1", 00:21:56.389 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:56.389 "strip_size_kb": 64, 00:21:56.389 "state": "online", 00:21:56.389 "raid_level": "raid5f", 00:21:56.389 "superblock": true, 00:21:56.389 "num_base_bdevs": 3, 00:21:56.389 "num_base_bdevs_discovered": 3, 00:21:56.389 "num_base_bdevs_operational": 3, 00:21:56.389 "base_bdevs_list": [ 00:21:56.389 { 00:21:56.389 "name": "spare", 00:21:56.389 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:56.389 "is_configured": true, 00:21:56.389 "data_offset": 2048, 00:21:56.389 "data_size": 63488 00:21:56.389 }, 00:21:56.389 { 00:21:56.389 "name": "BaseBdev2", 00:21:56.389 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:56.389 "is_configured": true, 00:21:56.389 "data_offset": 2048, 00:21:56.389 "data_size": 63488 00:21:56.389 }, 00:21:56.389 { 00:21:56.389 "name": "BaseBdev3", 00:21:56.389 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:56.389 "is_configured": true, 00:21:56.389 "data_offset": 2048, 00:21:56.389 "data_size": 63488 00:21:56.389 } 00:21:56.389 ] 00:21:56.389 }' 
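The cmp step traced above is the data-integrity check for the completed rebuild: after deleting the raid bdev, the test exports BaseBdev1 and the rebuilt spare as /dev/nbd0 and /dev/nbd1 and compares them byte-for-byte past the first 1048576 bytes, which matches the superblock data_offset of 2048 blocks at the 512-byte blocklen reported in the trace, so only the user-data region past the raid metadata must be identical. A condensed sketch, assuming the nbd kernel module is loaded and the same rpc.py invocation as above; cmp exits non-zero on the first mismatch, which fails the test:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    # skip data_offset (2048 blocks * 512 B = 1 MiB) on both devices;
    # everything past the raid metadata must match after the rebuild
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1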
00:21:56.389 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.389 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.958 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:56.958 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:56.958 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:56.958 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:56.958 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:56.958 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.958 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.218 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:57.218 "name": "raid_bdev1", 00:21:57.218 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:57.218 "strip_size_kb": 64, 00:21:57.218 "state": "online", 00:21:57.218 "raid_level": "raid5f", 00:21:57.218 "superblock": true, 00:21:57.218 "num_base_bdevs": 3, 00:21:57.218 "num_base_bdevs_discovered": 3, 00:21:57.218 "num_base_bdevs_operational": 3, 00:21:57.218 "base_bdevs_list": [ 00:21:57.218 { 00:21:57.218 "name": "spare", 00:21:57.218 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:21:57.218 "is_configured": true, 00:21:57.218 "data_offset": 2048, 00:21:57.218 "data_size": 63488 00:21:57.218 }, 00:21:57.218 { 00:21:57.218 "name": "BaseBdev2", 00:21:57.218 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:57.218 "is_configured": true, 00:21:57.218 "data_offset": 2048, 00:21:57.218 "data_size": 63488 00:21:57.218 }, 00:21:57.218 { 00:21:57.218 "name": "BaseBdev3", 00:21:57.218 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:57.218 "is_configured": true, 00:21:57.218 "data_offset": 2048, 00:21:57.218 "data_size": 63488 00:21:57.218 } 00:21:57.218 ] 00:21:57.218 }' 00:21:57.218 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:57.218 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:57.218 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:57.218 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:57.218 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.218 06:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:57.478 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.478 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:57.738 [2024-08-13 06:16:59.389082] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 
online raid5f 64 2 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.738 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.997 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.998 "name": "raid_bdev1", 00:21:57.998 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:21:57.998 "strip_size_kb": 64, 00:21:57.998 "state": "online", 00:21:57.998 "raid_level": "raid5f", 00:21:57.998 "superblock": true, 00:21:57.998 "num_base_bdevs": 3, 00:21:57.998 "num_base_bdevs_discovered": 2, 00:21:57.998 "num_base_bdevs_operational": 2, 00:21:57.998 "base_bdevs_list": [ 00:21:57.998 { 00:21:57.998 "name": null, 00:21:57.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.998 "is_configured": false, 00:21:57.998 "data_offset": 2048, 00:21:57.998 "data_size": 63488 00:21:57.998 }, 00:21:57.998 { 00:21:57.998 "name": "BaseBdev2", 00:21:57.998 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:21:57.998 "is_configured": true, 00:21:57.998 "data_offset": 2048, 00:21:57.998 "data_size": 63488 00:21:57.998 }, 00:21:57.998 { 00:21:57.998 "name": "BaseBdev3", 00:21:57.998 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:21:57.998 "is_configured": true, 00:21:57.998 "data_offset": 2048, 00:21:57.998 "data_size": 63488 00:21:57.998 } 00:21:57.998 ] 00:21:57.998 }' 00:21:57.998 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.998 06:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.567 06:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:58.827 [2024-08-13 06:17:00.383322] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.827 [2024-08-13 06:17:00.383528] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:58.827 [2024-08-13 06:17:00.383548] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
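The traces above exercise hot removal and re-add of a base bdev: bdev_raid_remove_base_bdev degrades raid_bdev1 to two of three operational base bdevs, and bdev_raid_add_base_bdev hands the spare back; because the spare's on-disk superblock sequence number (4) is older than the array's (5), it is re-added as stale and a rebuild is started. The two RPCs, condensed, with the same rpc shorthand as in the earlier sketches:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_raid_remove_base_bdev spare          # degrades to 2 of 3 operational
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare  # stale superblock (seq 4 < 5):
                                                   # spare is re-added, rebuild starts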
00:21:58.827 [2024-08-13 06:17:00.383602] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.827 [2024-08-13 06:17:00.387212] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:21:58.827 [2024-08-13 06:17:00.389109] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:58.827 06:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:21:59.766 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.766 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:59.766 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:59.766 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:59.766 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:59.766 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.766 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.026 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:00.026 "name": "raid_bdev1", 00:22:00.026 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:00.026 "strip_size_kb": 64, 00:22:00.026 "state": "online", 00:22:00.026 "raid_level": "raid5f", 00:22:00.026 "superblock": true, 00:22:00.026 "num_base_bdevs": 3, 00:22:00.026 "num_base_bdevs_discovered": 3, 00:22:00.026 "num_base_bdevs_operational": 3, 00:22:00.026 "process": { 00:22:00.026 "type": "rebuild", 00:22:00.026 "target": "spare", 00:22:00.026 "progress": { 00:22:00.026 "blocks": 24576, 00:22:00.026 "percent": 19 00:22:00.026 } 00:22:00.026 }, 00:22:00.026 "base_bdevs_list": [ 00:22:00.026 { 00:22:00.026 "name": "spare", 00:22:00.026 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:22:00.026 "is_configured": true, 00:22:00.026 "data_offset": 2048, 00:22:00.026 "data_size": 63488 00:22:00.026 }, 00:22:00.026 { 00:22:00.026 "name": "BaseBdev2", 00:22:00.026 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:00.026 "is_configured": true, 00:22:00.026 "data_offset": 2048, 00:22:00.026 "data_size": 63488 00:22:00.026 }, 00:22:00.026 { 00:22:00.026 "name": "BaseBdev3", 00:22:00.026 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:00.026 "is_configured": true, 00:22:00.026 "data_offset": 2048, 00:22:00.026 "data_size": 63488 00:22:00.026 } 00:22:00.026 ] 00:22:00.026 }' 00:22:00.026 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:00.026 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.026 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:00.026 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.026 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:00.286 [2024-08-13 06:17:01.883290] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.286 [2024-08-13 
06:17:01.897131] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:00.286 [2024-08-13 06:17:01.897183] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.286 [2024-08-13 06:17:01.897198] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.286 [2024-08-13 06:17:01.897206] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.286 06:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.546 06:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.546 "name": "raid_bdev1", 00:22:00.546 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:00.546 "strip_size_kb": 64, 00:22:00.546 "state": "online", 00:22:00.546 "raid_level": "raid5f", 00:22:00.546 "superblock": true, 00:22:00.546 "num_base_bdevs": 3, 00:22:00.546 "num_base_bdevs_discovered": 2, 00:22:00.546 "num_base_bdevs_operational": 2, 00:22:00.546 "base_bdevs_list": [ 00:22:00.546 { 00:22:00.546 "name": null, 00:22:00.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.546 "is_configured": false, 00:22:00.546 "data_offset": 2048, 00:22:00.546 "data_size": 63488 00:22:00.546 }, 00:22:00.546 { 00:22:00.546 "name": "BaseBdev2", 00:22:00.546 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:00.546 "is_configured": true, 00:22:00.546 "data_offset": 2048, 00:22:00.546 "data_size": 63488 00:22:00.546 }, 00:22:00.546 { 00:22:00.546 "name": "BaseBdev3", 00:22:00.546 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:00.546 "is_configured": true, 00:22:00.546 "data_offset": 2048, 00:22:00.546 "data_size": 63488 00:22:00.546 } 00:22:00.546 ] 00:22:00.546 }' 00:22:00.546 06:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.546 06:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.115 06:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:01.115 
[2024-08-13 06:17:02.880410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.115 [2024-08-13 06:17:02.880464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.115 [2024-08-13 06:17:02.880480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:01.115 [2024-08-13 06:17:02.880490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.115 [2024-08-13 06:17:02.880853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.115 [2024-08-13 06:17:02.880882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.115 [2024-08-13 06:17:02.880948] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:01.115 [2024-08-13 06:17:02.880960] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:01.115 [2024-08-13 06:17:02.880969] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:01.115 [2024-08-13 06:17:02.880994] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:01.115 [2024-08-13 06:17:02.884205] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:22:01.115 [2024-08-13 06:17:02.886187] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:01.116 spare 00:22:01.375 06:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:22:02.316 06:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.316 06:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:02.316 06:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:02.316 06:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:02.316 06:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:02.316 06:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.316 06:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.575 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:02.575 "name": "raid_bdev1", 00:22:02.575 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:02.575 "strip_size_kb": 64, 00:22:02.575 "state": "online", 00:22:02.575 "raid_level": "raid5f", 00:22:02.575 "superblock": true, 00:22:02.575 "num_base_bdevs": 3, 00:22:02.575 "num_base_bdevs_discovered": 3, 00:22:02.575 "num_base_bdevs_operational": 3, 00:22:02.575 "process": { 00:22:02.575 "type": "rebuild", 00:22:02.575 "target": "spare", 00:22:02.575 "progress": { 00:22:02.575 "blocks": 24576, 00:22:02.575 "percent": 19 00:22:02.575 } 00:22:02.575 }, 00:22:02.575 "base_bdevs_list": [ 00:22:02.575 { 00:22:02.575 "name": "spare", 00:22:02.575 "uuid": "acfcc486-3168-52c6-bf04-dac1caaeedee", 00:22:02.575 "is_configured": true, 00:22:02.575 "data_offset": 2048, 00:22:02.575 "data_size": 63488 00:22:02.575 }, 00:22:02.575 { 00:22:02.575 "name": "BaseBdev2", 00:22:02.575 "uuid": 
"32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:02.575 "is_configured": true, 00:22:02.575 "data_offset": 2048, 00:22:02.575 "data_size": 63488 00:22:02.575 }, 00:22:02.575 { 00:22:02.575 "name": "BaseBdev3", 00:22:02.575 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:02.575 "is_configured": true, 00:22:02.575 "data_offset": 2048, 00:22:02.575 "data_size": 63488 00:22:02.575 } 00:22:02.575 ] 00:22:02.575 }' 00:22:02.575 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:02.575 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.575 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:02.575 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.575 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:02.835 [2024-08-13 06:17:04.413268] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:02.835 [2024-08-13 06:17:04.494610] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:02.835 [2024-08-13 06:17:04.494660] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.835 [2024-08-13 06:17:04.494677] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:02.835 [2024-08-13 06:17:04.494684] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.835 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.095 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.095 "name": "raid_bdev1", 00:22:03.095 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:03.095 "strip_size_kb": 64, 00:22:03.095 "state": "online", 00:22:03.095 "raid_level": "raid5f", 00:22:03.095 "superblock": true, 00:22:03.095 "num_base_bdevs": 3, 00:22:03.095 "num_base_bdevs_discovered": 2, 00:22:03.095 
"num_base_bdevs_operational": 2, 00:22:03.095 "base_bdevs_list": [ 00:22:03.095 { 00:22:03.095 "name": null, 00:22:03.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.095 "is_configured": false, 00:22:03.095 "data_offset": 2048, 00:22:03.095 "data_size": 63488 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "name": "BaseBdev2", 00:22:03.095 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:03.095 "is_configured": true, 00:22:03.095 "data_offset": 2048, 00:22:03.095 "data_size": 63488 00:22:03.095 }, 00:22:03.095 { 00:22:03.095 "name": "BaseBdev3", 00:22:03.095 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:03.095 "is_configured": true, 00:22:03.095 "data_offset": 2048, 00:22:03.095 "data_size": 63488 00:22:03.095 } 00:22:03.095 ] 00:22:03.095 }' 00:22:03.095 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.095 06:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.664 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.664 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:03.664 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:03.664 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:03.664 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:03.664 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.664 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.924 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.924 "name": "raid_bdev1", 00:22:03.924 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:03.924 "strip_size_kb": 64, 00:22:03.924 "state": "online", 00:22:03.924 "raid_level": "raid5f", 00:22:03.924 "superblock": true, 00:22:03.924 "num_base_bdevs": 3, 00:22:03.924 "num_base_bdevs_discovered": 2, 00:22:03.924 "num_base_bdevs_operational": 2, 00:22:03.924 "base_bdevs_list": [ 00:22:03.924 { 00:22:03.924 "name": null, 00:22:03.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.924 "is_configured": false, 00:22:03.924 "data_offset": 2048, 00:22:03.924 "data_size": 63488 00:22:03.924 }, 00:22:03.924 { 00:22:03.924 "name": "BaseBdev2", 00:22:03.924 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:03.924 "is_configured": true, 00:22:03.924 "data_offset": 2048, 00:22:03.924 "data_size": 63488 00:22:03.924 }, 00:22:03.924 { 00:22:03.924 "name": "BaseBdev3", 00:22:03.924 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:03.924 "is_configured": true, 00:22:03.924 "data_offset": 2048, 00:22:03.924 "data_size": 63488 00:22:03.924 } 00:22:03.924 ] 00:22:03.924 }' 00:22:03.924 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:03.924 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:03.924 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:03.924 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:03.924 06:17:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:04.184 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:04.185 [2024-08-13 06:17:05.925317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:04.185 [2024-08-13 06:17:05.925370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.185 [2024-08-13 06:17:05.925394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:04.185 [2024-08-13 06:17:05.925405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.185 [2024-08-13 06:17:05.925772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.185 [2024-08-13 06:17:05.925796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:04.185 [2024-08-13 06:17:05.925864] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:04.185 [2024-08-13 06:17:05.925877] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:04.185 [2024-08-13 06:17:05.925899] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:04.185 BaseBdev1 00:22:04.185 06:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.564 06:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.564 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.564 "name": "raid_bdev1", 00:22:05.564 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:05.564 "strip_size_kb": 64, 00:22:05.564 "state": "online", 00:22:05.564 "raid_level": "raid5f", 00:22:05.564 "superblock": true, 00:22:05.564 "num_base_bdevs": 3, 00:22:05.564 "num_base_bdevs_discovered": 2, 00:22:05.564 
"num_base_bdevs_operational": 2, 00:22:05.564 "base_bdevs_list": [ 00:22:05.564 { 00:22:05.564 "name": null, 00:22:05.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.564 "is_configured": false, 00:22:05.564 "data_offset": 2048, 00:22:05.564 "data_size": 63488 00:22:05.564 }, 00:22:05.564 { 00:22:05.564 "name": "BaseBdev2", 00:22:05.564 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:05.564 "is_configured": true, 00:22:05.564 "data_offset": 2048, 00:22:05.564 "data_size": 63488 00:22:05.564 }, 00:22:05.564 { 00:22:05.564 "name": "BaseBdev3", 00:22:05.564 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:05.564 "is_configured": true, 00:22:05.564 "data_offset": 2048, 00:22:05.564 "data_size": 63488 00:22:05.564 } 00:22:05.564 ] 00:22:05.564 }' 00:22:05.564 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.564 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.133 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.133 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:06.133 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:06.133 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:06.133 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:06.133 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.134 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.134 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:06.134 "name": "raid_bdev1", 00:22:06.134 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:06.134 "strip_size_kb": 64, 00:22:06.134 "state": "online", 00:22:06.134 "raid_level": "raid5f", 00:22:06.134 "superblock": true, 00:22:06.134 "num_base_bdevs": 3, 00:22:06.134 "num_base_bdevs_discovered": 2, 00:22:06.134 "num_base_bdevs_operational": 2, 00:22:06.134 "base_bdevs_list": [ 00:22:06.134 { 00:22:06.134 "name": null, 00:22:06.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.134 "is_configured": false, 00:22:06.134 "data_offset": 2048, 00:22:06.134 "data_size": 63488 00:22:06.134 }, 00:22:06.134 { 00:22:06.134 "name": "BaseBdev2", 00:22:06.134 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:06.134 "is_configured": true, 00:22:06.134 "data_offset": 2048, 00:22:06.134 "data_size": 63488 00:22:06.134 }, 00:22:06.134 { 00:22:06.134 "name": "BaseBdev3", 00:22:06.134 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:06.134 "is_configured": true, 00:22:06.134 "data_offset": 2048, 00:22:06.134 "data_size": 63488 00:22:06.134 } 00:22:06.134 ] 00:22:06.134 }' 00:22:06.134 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:06.134 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:06.134 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:06.394 06:17:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@646 -- # local es=0 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:06.394 06:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:06.394 [2024-08-13 06:17:08.149566] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.394 [2024-08-13 06:17:08.149720] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:06.394 [2024-08-13 06:17:08.149734] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:06.394 request: 00:22:06.394 { 00:22:06.394 "base_bdev": "BaseBdev1", 00:22:06.394 "raid_bdev": "raid_bdev1", 00:22:06.394 "method": "bdev_raid_add_base_bdev", 00:22:06.394 "req_id": 1 00:22:06.394 } 00:22:06.394 Got JSON-RPC error response 00:22:06.394 response: 00:22:06.394 { 00:22:06.394 "code": -22, 00:22:06.394 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:06.394 } 00:22:06.394 06:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:22:06.394 06:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:22:06.394 06:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:22:06.394 06:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:22:06.394 06:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
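The NOT wrapper traced above is autotest's negative assertion: it inverts the wrapped command's exit status (the es=1 bookkeeping in autotest_common.sh), so the step passes only if the RPC fails. Here BaseBdev1 was re-created from BaseBdev1_malloc, its uuid is no longer present in raid_bdev1's superblock, and the add is rejected with JSON-RPC error -22 (Invalid argument). A standalone equivalent, assuming the rpc shorthand from the earlier sketches; a plain '!' stands in for the NOT helper:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # the assertion passes only when the RPC fails
    if ! $rpc bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo "add of BaseBdev1 rejected as expected (-22, Invalid argument)"
    fi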
00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.774 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.774 "name": "raid_bdev1", 00:22:07.774 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:07.774 "strip_size_kb": 64, 00:22:07.774 "state": "online", 00:22:07.774 "raid_level": "raid5f", 00:22:07.774 "superblock": true, 00:22:07.774 "num_base_bdevs": 3, 00:22:07.774 "num_base_bdevs_discovered": 2, 00:22:07.774 "num_base_bdevs_operational": 2, 00:22:07.774 "base_bdevs_list": [ 00:22:07.774 { 00:22:07.774 "name": null, 00:22:07.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.774 "is_configured": false, 00:22:07.774 "data_offset": 2048, 00:22:07.775 "data_size": 63488 00:22:07.775 }, 00:22:07.775 { 00:22:07.775 "name": "BaseBdev2", 00:22:07.775 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:07.775 "is_configured": true, 00:22:07.775 "data_offset": 2048, 00:22:07.775 "data_size": 63488 00:22:07.775 }, 00:22:07.775 { 00:22:07.775 "name": "BaseBdev3", 00:22:07.775 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:07.775 "is_configured": true, 00:22:07.775 "data_offset": 2048, 00:22:07.775 "data_size": 63488 00:22:07.775 } 00:22:07.775 ] 00:22:07.775 }' 00:22:07.775 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.775 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.342 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.342 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:08.342 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:08.342 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:08.342 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:08.342 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.342 06:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:08.601 "name": "raid_bdev1", 00:22:08.601 "uuid": "26fd2a1a-5f9b-4888-8fbc-1dc00a0daf0a", 00:22:08.601 
"strip_size_kb": 64, 00:22:08.601 "state": "online", 00:22:08.601 "raid_level": "raid5f", 00:22:08.601 "superblock": true, 00:22:08.601 "num_base_bdevs": 3, 00:22:08.601 "num_base_bdevs_discovered": 2, 00:22:08.601 "num_base_bdevs_operational": 2, 00:22:08.601 "base_bdevs_list": [ 00:22:08.601 { 00:22:08.601 "name": null, 00:22:08.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.601 "is_configured": false, 00:22:08.601 "data_offset": 2048, 00:22:08.601 "data_size": 63488 00:22:08.601 }, 00:22:08.601 { 00:22:08.601 "name": "BaseBdev2", 00:22:08.601 "uuid": "32f6b597-35dc-5c7f-b7d2-63af1acd5720", 00:22:08.601 "is_configured": true, 00:22:08.601 "data_offset": 2048, 00:22:08.601 "data_size": 63488 00:22:08.601 }, 00:22:08.601 { 00:22:08.601 "name": "BaseBdev3", 00:22:08.601 "uuid": "7f25d0bc-999e-5d9c-94f0-2189f065575c", 00:22:08.601 "is_configured": true, 00:22:08.601 "data_offset": 2048, 00:22:08.601 "data_size": 63488 00:22:08.601 } 00:22:08.601 ] 00:22:08.601 }' 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 100163 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 100163 ']' 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 100163 00:22:08.601 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:22:08.602 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:08.602 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100163 00:22:08.602 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:08.602 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:08.602 killing process with pid 100163 00:22:08.602 Received shutdown signal, test time was about 60.000000 seconds 00:22:08.602 00:22:08.602 Latency(us) 00:22:08.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.602 =================================================================================================================== 00:22:08.602 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:08.602 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100163' 00:22:08.602 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 100163 00:22:08.602 [2024-08-13 06:17:10.249086] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:08.602 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 100163 00:22:08.602 [2024-08-13 06:17:10.249225] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.602 [2024-08-13 06:17:10.249285] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:08.602 [2024-08-13 06:17:10.249295] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:22:08.602 [2024-08-13 06:17:10.290407] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:08.862 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:22:08.862 00:22:08.862 real 0m30.807s 00:22:08.862 user 0m47.216s 00:22:08.862 sys 0m4.259s 00:22:08.862 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:08.862 06:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.862 ************************************ 00:22:08.862 END TEST raid5f_rebuild_test_sb 00:22:08.862 ************************************ 00:22:08.862 06:17:10 bdev_raid -- bdev/bdev_raid.sh@964 -- # for n in {3..4} 00:22:08.862 06:17:10 bdev_raid -- bdev/bdev_raid.sh@965 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:22:08.862 06:17:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:08.862 06:17:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:08.862 06:17:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:08.862 ************************************ 00:22:08.862 START TEST raid5f_state_function_test 00:22:08.862 ************************************ 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 false 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=101008 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 101008' 00:22:08.862 Process raid pid: 101008 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 101008 /var/tmp/spdk-raid.sock 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 101008 ']' 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:08.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:08.862 06:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.122 [2024-08-13 06:17:10.697163] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
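[editor's note] Before the raw trace continues, a minimal sketch of the RPC sequence that raid5f_state_function_test exercises below. The rpc.py path, the /var/tmp/spdk-raid.sock socket, and every command shown here are taken verbatim from the trace that follows; the loop and the trailing "| .state" jq filter are illustrative shorthand for what the verify_raid_bdev_state helper in bdev_raid.sh actually checks, not a literal excerpt of that script.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# Create the raid5f bdev before any of its base bdevs exist; it is registered but stays "configuring".
$RPC -s $SOCK bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expected: configuring
# Back the base bdevs one at a time with malloc bdevs (32 MiB, 512-byte blocks); each new bdev is
# claimed by the raid on examine, and the raid only goes "online" once all four are discovered.
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC -s $SOCK bdev_malloc_create 32 512 -b "$b"
done
$RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expected: online
# The test also tears the volume down between sub-steps, as seen repeatedly in the trace:
$RPC -s $SOCK bdev_raid_delete Existed_Raid
In the actual run below, the create/verify/delete cycle is interleaved with per-bdev state checks rather than done in one loop; this condensed form is only meant as a reading aid for the xtrace output.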
00:22:09.122 [2024-08-13 06:17:10.697291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.122 [2024-08-13 06:17:10.843340] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.122 [2024-08-13 06:17:10.888490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.382 [2024-08-13 06:17:10.931749] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:09.382 [2024-08-13 06:17:10.931782] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:09.952 [2024-08-13 06:17:11.699619] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.952 [2024-08-13 06:17:11.699667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.952 [2024-08-13 06:17:11.699685] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:09.952 [2024-08-13 06:17:11.699693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:09.952 [2024-08-13 06:17:11.699702] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:09.952 [2024-08-13 06:17:11.699709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:09.952 [2024-08-13 06:17:11.699717] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:09.952 [2024-08-13 06:17:11.699724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:09.952 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.211 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.211 "name": "Existed_Raid", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "strip_size_kb": 64, 00:22:10.211 "state": "configuring", 00:22:10.211 "raid_level": "raid5f", 00:22:10.211 "superblock": false, 00:22:10.211 "num_base_bdevs": 4, 00:22:10.211 "num_base_bdevs_discovered": 0, 00:22:10.211 "num_base_bdevs_operational": 4, 00:22:10.211 "base_bdevs_list": [ 00:22:10.211 { 00:22:10.211 "name": "BaseBdev1", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "is_configured": false, 00:22:10.211 "data_offset": 0, 00:22:10.211 "data_size": 0 00:22:10.211 }, 00:22:10.211 { 00:22:10.211 "name": "BaseBdev2", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "is_configured": false, 00:22:10.211 "data_offset": 0, 00:22:10.211 "data_size": 0 00:22:10.211 }, 00:22:10.211 { 00:22:10.211 "name": "BaseBdev3", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "is_configured": false, 00:22:10.211 "data_offset": 0, 00:22:10.211 "data_size": 0 00:22:10.211 }, 00:22:10.211 { 00:22:10.211 "name": "BaseBdev4", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "is_configured": false, 00:22:10.211 "data_offset": 0, 00:22:10.211 "data_size": 0 00:22:10.211 } 00:22:10.211 ] 00:22:10.211 }' 00:22:10.212 06:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.212 06:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.781 06:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:11.041 [2024-08-13 06:17:12.626001] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:11.041 [2024-08-13 06:17:12.626047] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:22:11.041 06:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:11.041 [2024-08-13 06:17:12.817631] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:11.041 [2024-08-13 06:17:12.817682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:11.041 [2024-08-13 06:17:12.817692] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:11.041 [2024-08-13 06:17:12.817698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:11.041 [2024-08-13 06:17:12.817706] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:11.041 [2024-08-13 06:17:12.817712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:11.041 [2024-08-13 06:17:12.817719] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:11.041 [2024-08-13 06:17:12.817725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:11.301 06:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:11.301 [2024-08-13 06:17:13.002292] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.301 BaseBdev1 00:22:11.301 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:11.301 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:11.301 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:11.301 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:11.301 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:11.301 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:11.301 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:11.561 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:11.821 [ 00:22:11.821 { 00:22:11.821 "name": "BaseBdev1", 00:22:11.821 "aliases": [ 00:22:11.821 "6c3eb2f5-1671-4719-87e1-b942a4502cb5" 00:22:11.821 ], 00:22:11.821 "product_name": "Malloc disk", 00:22:11.821 "block_size": 512, 00:22:11.821 "num_blocks": 65536, 00:22:11.821 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:11.821 "assigned_rate_limits": { 00:22:11.821 "rw_ios_per_sec": 0, 00:22:11.821 "rw_mbytes_per_sec": 0, 00:22:11.821 "r_mbytes_per_sec": 0, 00:22:11.821 "w_mbytes_per_sec": 0 00:22:11.821 }, 00:22:11.821 "claimed": true, 00:22:11.821 "claim_type": "exclusive_write", 00:22:11.821 "zoned": false, 00:22:11.821 "supported_io_types": { 00:22:11.821 "read": true, 00:22:11.821 "write": true, 00:22:11.821 "unmap": true, 00:22:11.821 "flush": true, 00:22:11.821 "reset": true, 00:22:11.821 "nvme_admin": false, 00:22:11.821 "nvme_io": false, 00:22:11.821 "nvme_io_md": false, 00:22:11.821 "write_zeroes": true, 00:22:11.821 "zcopy": true, 00:22:11.821 "get_zone_info": false, 00:22:11.821 "zone_management": false, 00:22:11.821 "zone_append": false, 00:22:11.821 "compare": false, 00:22:11.821 "compare_and_write": false, 00:22:11.821 "abort": true, 00:22:11.821 "seek_hole": false, 00:22:11.821 "seek_data": false, 00:22:11.821 "copy": true, 00:22:11.821 "nvme_iov_md": false 00:22:11.821 }, 00:22:11.821 "memory_domains": [ 00:22:11.821 { 00:22:11.821 "dma_device_id": "system", 00:22:11.821 "dma_device_type": 1 00:22:11.821 }, 00:22:11.821 { 00:22:11.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.821 "dma_device_type": 2 00:22:11.821 } 00:22:11.821 ], 00:22:11.821 "driver_specific": {} 00:22:11.821 } 00:22:11.821 ] 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.821 06:17:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.821 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.084 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.084 "name": "Existed_Raid", 00:22:12.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.084 "strip_size_kb": 64, 00:22:12.084 "state": "configuring", 00:22:12.084 "raid_level": "raid5f", 00:22:12.084 "superblock": false, 00:22:12.084 "num_base_bdevs": 4, 00:22:12.084 "num_base_bdevs_discovered": 1, 00:22:12.084 "num_base_bdevs_operational": 4, 00:22:12.084 "base_bdevs_list": [ 00:22:12.084 { 00:22:12.084 "name": "BaseBdev1", 00:22:12.085 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:12.085 "is_configured": true, 00:22:12.085 "data_offset": 0, 00:22:12.085 "data_size": 65536 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "name": "BaseBdev2", 00:22:12.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.085 "is_configured": false, 00:22:12.085 "data_offset": 0, 00:22:12.085 "data_size": 0 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "name": "BaseBdev3", 00:22:12.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.085 "is_configured": false, 00:22:12.085 "data_offset": 0, 00:22:12.085 "data_size": 0 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "name": "BaseBdev4", 00:22:12.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.085 "is_configured": false, 00:22:12.085 "data_offset": 0, 00:22:12.085 "data_size": 0 00:22:12.085 } 00:22:12.085 ] 00:22:12.085 }' 00:22:12.085 06:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.085 06:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.653 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:12.653 [2024-08-13 06:17:14.332140] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.653 [2024-08-13 06:17:14.332198] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:22:12.653 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n 
Existed_Raid 00:22:12.913 [2024-08-13 06:17:14.535830] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.913 [2024-08-13 06:17:14.537537] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.913 [2024-08-13 06:17:14.537574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.913 [2024-08-13 06:17:14.537588] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:12.913 [2024-08-13 06:17:14.537595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:12.913 [2024-08-13 06:17:14.537602] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:12.913 [2024-08-13 06:17:14.537608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:12.913 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.914 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.173 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.173 "name": "Existed_Raid", 00:22:13.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.173 "strip_size_kb": 64, 00:22:13.173 "state": "configuring", 00:22:13.173 "raid_level": "raid5f", 00:22:13.173 "superblock": false, 00:22:13.173 "num_base_bdevs": 4, 00:22:13.173 "num_base_bdevs_discovered": 1, 00:22:13.173 "num_base_bdevs_operational": 4, 00:22:13.173 "base_bdevs_list": [ 00:22:13.173 { 00:22:13.173 "name": "BaseBdev1", 00:22:13.173 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:13.174 "is_configured": true, 00:22:13.174 "data_offset": 0, 00:22:13.174 "data_size": 65536 00:22:13.174 }, 00:22:13.174 { 00:22:13.174 "name": "BaseBdev2", 00:22:13.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.174 "is_configured": false, 00:22:13.174 "data_offset": 0, 
00:22:13.174 "data_size": 0 00:22:13.174 }, 00:22:13.174 { 00:22:13.174 "name": "BaseBdev3", 00:22:13.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.174 "is_configured": false, 00:22:13.174 "data_offset": 0, 00:22:13.174 "data_size": 0 00:22:13.174 }, 00:22:13.174 { 00:22:13.174 "name": "BaseBdev4", 00:22:13.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.174 "is_configured": false, 00:22:13.174 "data_offset": 0, 00:22:13.174 "data_size": 0 00:22:13.174 } 00:22:13.174 ] 00:22:13.174 }' 00:22:13.174 06:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.174 06:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:13.743 [2024-08-13 06:17:15.443614] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.743 BaseBdev2 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:13.743 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.003 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:14.263 [ 00:22:14.263 { 00:22:14.263 "name": "BaseBdev2", 00:22:14.263 "aliases": [ 00:22:14.263 "d3a818c4-2a04-4136-8403-f406d03746ef" 00:22:14.263 ], 00:22:14.263 "product_name": "Malloc disk", 00:22:14.263 "block_size": 512, 00:22:14.263 "num_blocks": 65536, 00:22:14.263 "uuid": "d3a818c4-2a04-4136-8403-f406d03746ef", 00:22:14.263 "assigned_rate_limits": { 00:22:14.263 "rw_ios_per_sec": 0, 00:22:14.263 "rw_mbytes_per_sec": 0, 00:22:14.263 "r_mbytes_per_sec": 0, 00:22:14.263 "w_mbytes_per_sec": 0 00:22:14.263 }, 00:22:14.263 "claimed": true, 00:22:14.263 "claim_type": "exclusive_write", 00:22:14.263 "zoned": false, 00:22:14.263 "supported_io_types": { 00:22:14.263 "read": true, 00:22:14.263 "write": true, 00:22:14.263 "unmap": true, 00:22:14.263 "flush": true, 00:22:14.263 "reset": true, 00:22:14.263 "nvme_admin": false, 00:22:14.263 "nvme_io": false, 00:22:14.263 "nvme_io_md": false, 00:22:14.263 "write_zeroes": true, 00:22:14.263 "zcopy": true, 00:22:14.263 "get_zone_info": false, 00:22:14.263 "zone_management": false, 00:22:14.263 "zone_append": false, 00:22:14.263 "compare": false, 00:22:14.263 "compare_and_write": false, 00:22:14.263 "abort": true, 00:22:14.263 "seek_hole": false, 00:22:14.263 "seek_data": false, 00:22:14.263 "copy": true, 00:22:14.263 "nvme_iov_md": false 00:22:14.263 }, 00:22:14.263 "memory_domains": [ 00:22:14.263 { 00:22:14.263 "dma_device_id": 
"system", 00:22:14.263 "dma_device_type": 1 00:22:14.263 }, 00:22:14.263 { 00:22:14.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.263 "dma_device_type": 2 00:22:14.263 } 00:22:14.263 ], 00:22:14.263 "driver_specific": {} 00:22:14.263 } 00:22:14.263 ] 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.263 06:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.263 06:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.263 "name": "Existed_Raid", 00:22:14.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.263 "strip_size_kb": 64, 00:22:14.263 "state": "configuring", 00:22:14.263 "raid_level": "raid5f", 00:22:14.263 "superblock": false, 00:22:14.263 "num_base_bdevs": 4, 00:22:14.263 "num_base_bdevs_discovered": 2, 00:22:14.263 "num_base_bdevs_operational": 4, 00:22:14.263 "base_bdevs_list": [ 00:22:14.263 { 00:22:14.263 "name": "BaseBdev1", 00:22:14.263 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:14.263 "is_configured": true, 00:22:14.263 "data_offset": 0, 00:22:14.263 "data_size": 65536 00:22:14.263 }, 00:22:14.263 { 00:22:14.263 "name": "BaseBdev2", 00:22:14.263 "uuid": "d3a818c4-2a04-4136-8403-f406d03746ef", 00:22:14.263 "is_configured": true, 00:22:14.263 "data_offset": 0, 00:22:14.263 "data_size": 65536 00:22:14.263 }, 00:22:14.263 { 00:22:14.263 "name": "BaseBdev3", 00:22:14.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.264 "is_configured": false, 00:22:14.264 "data_offset": 0, 00:22:14.264 "data_size": 0 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "name": "BaseBdev4", 00:22:14.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.264 "is_configured": false, 00:22:14.264 "data_offset": 0, 00:22:14.264 "data_size": 0 00:22:14.264 } 00:22:14.264 ] 00:22:14.264 }' 00:22:14.264 06:17:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.264 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.833 06:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:15.093 [2024-08-13 06:17:16.700488] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:15.093 BaseBdev3 00:22:15.093 06:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:15.093 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:15.093 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:15.093 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:15.093 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:15.093 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:15.093 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:15.353 06:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:15.353 [ 00:22:15.353 { 00:22:15.353 "name": "BaseBdev3", 00:22:15.353 "aliases": [ 00:22:15.353 "0ece9c39-97cb-44ad-aa7d-0a86c0941a81" 00:22:15.353 ], 00:22:15.353 "product_name": "Malloc disk", 00:22:15.353 "block_size": 512, 00:22:15.353 "num_blocks": 65536, 00:22:15.353 "uuid": "0ece9c39-97cb-44ad-aa7d-0a86c0941a81", 00:22:15.353 "assigned_rate_limits": { 00:22:15.353 "rw_ios_per_sec": 0, 00:22:15.353 "rw_mbytes_per_sec": 0, 00:22:15.353 "r_mbytes_per_sec": 0, 00:22:15.353 "w_mbytes_per_sec": 0 00:22:15.353 }, 00:22:15.353 "claimed": true, 00:22:15.353 "claim_type": "exclusive_write", 00:22:15.353 "zoned": false, 00:22:15.353 "supported_io_types": { 00:22:15.353 "read": true, 00:22:15.353 "write": true, 00:22:15.353 "unmap": true, 00:22:15.353 "flush": true, 00:22:15.353 "reset": true, 00:22:15.353 "nvme_admin": false, 00:22:15.353 "nvme_io": false, 00:22:15.353 "nvme_io_md": false, 00:22:15.353 "write_zeroes": true, 00:22:15.353 "zcopy": true, 00:22:15.353 "get_zone_info": false, 00:22:15.353 "zone_management": false, 00:22:15.353 "zone_append": false, 00:22:15.353 "compare": false, 00:22:15.353 "compare_and_write": false, 00:22:15.353 "abort": true, 00:22:15.353 "seek_hole": false, 00:22:15.353 "seek_data": false, 00:22:15.353 "copy": true, 00:22:15.353 "nvme_iov_md": false 00:22:15.353 }, 00:22:15.353 "memory_domains": [ 00:22:15.353 { 00:22:15.353 "dma_device_id": "system", 00:22:15.353 "dma_device_type": 1 00:22:15.353 }, 00:22:15.353 { 00:22:15.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.353 "dma_device_type": 2 00:22:15.353 } 00:22:15.353 ], 00:22:15.353 "driver_specific": {} 00:22:15.353 } 00:22:15.353 ] 00:22:15.353 06:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:15.353 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:15.353 06:17:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:15.353 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:15.353 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.353 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.354 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.613 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.613 "name": "Existed_Raid", 00:22:15.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.613 "strip_size_kb": 64, 00:22:15.613 "state": "configuring", 00:22:15.613 "raid_level": "raid5f", 00:22:15.613 "superblock": false, 00:22:15.613 "num_base_bdevs": 4, 00:22:15.613 "num_base_bdevs_discovered": 3, 00:22:15.613 "num_base_bdevs_operational": 4, 00:22:15.614 "base_bdevs_list": [ 00:22:15.614 { 00:22:15.614 "name": "BaseBdev1", 00:22:15.614 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:15.614 "is_configured": true, 00:22:15.614 "data_offset": 0, 00:22:15.614 "data_size": 65536 00:22:15.614 }, 00:22:15.614 { 00:22:15.614 "name": "BaseBdev2", 00:22:15.614 "uuid": "d3a818c4-2a04-4136-8403-f406d03746ef", 00:22:15.614 "is_configured": true, 00:22:15.614 "data_offset": 0, 00:22:15.614 "data_size": 65536 00:22:15.614 }, 00:22:15.614 { 00:22:15.614 "name": "BaseBdev3", 00:22:15.614 "uuid": "0ece9c39-97cb-44ad-aa7d-0a86c0941a81", 00:22:15.614 "is_configured": true, 00:22:15.614 "data_offset": 0, 00:22:15.614 "data_size": 65536 00:22:15.614 }, 00:22:15.614 { 00:22:15.614 "name": "BaseBdev4", 00:22:15.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.614 "is_configured": false, 00:22:15.614 "data_offset": 0, 00:22:15.614 "data_size": 0 00:22:15.614 } 00:22:15.614 ] 00:22:15.614 }' 00:22:15.614 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.614 06:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.183 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:16.183 [2024-08-13 06:17:17.969257] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:16.183 [2024-08-13 
06:17:17.969321] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:22:16.183 [2024-08-13 06:17:17.969337] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:16.183 [2024-08-13 06:17:17.969587] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:22:16.183 [2024-08-13 06:17:17.970025] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:22:16.183 [2024-08-13 06:17:17.970069] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:22:16.183 [2024-08-13 06:17:17.970286] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.183 BaseBdev4 00:22:16.443 06:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:16.443 06:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:22:16.443 06:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:16.443 06:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:16.443 06:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:16.443 06:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:16.443 06:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:16.443 06:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:16.711 [ 00:22:16.711 { 00:22:16.711 "name": "BaseBdev4", 00:22:16.711 "aliases": [ 00:22:16.711 "d7d5cdb7-775a-44e1-8f0e-2a680296e945" 00:22:16.711 ], 00:22:16.711 "product_name": "Malloc disk", 00:22:16.711 "block_size": 512, 00:22:16.711 "num_blocks": 65536, 00:22:16.711 "uuid": "d7d5cdb7-775a-44e1-8f0e-2a680296e945", 00:22:16.711 "assigned_rate_limits": { 00:22:16.711 "rw_ios_per_sec": 0, 00:22:16.711 "rw_mbytes_per_sec": 0, 00:22:16.711 "r_mbytes_per_sec": 0, 00:22:16.711 "w_mbytes_per_sec": 0 00:22:16.711 }, 00:22:16.711 "claimed": true, 00:22:16.711 "claim_type": "exclusive_write", 00:22:16.711 "zoned": false, 00:22:16.711 "supported_io_types": { 00:22:16.711 "read": true, 00:22:16.711 "write": true, 00:22:16.711 "unmap": true, 00:22:16.711 "flush": true, 00:22:16.711 "reset": true, 00:22:16.711 "nvme_admin": false, 00:22:16.711 "nvme_io": false, 00:22:16.711 "nvme_io_md": false, 00:22:16.711 "write_zeroes": true, 00:22:16.711 "zcopy": true, 00:22:16.711 "get_zone_info": false, 00:22:16.711 "zone_management": false, 00:22:16.711 "zone_append": false, 00:22:16.711 "compare": false, 00:22:16.711 "compare_and_write": false, 00:22:16.711 "abort": true, 00:22:16.711 "seek_hole": false, 00:22:16.711 "seek_data": false, 00:22:16.711 "copy": true, 00:22:16.711 "nvme_iov_md": false 00:22:16.711 }, 00:22:16.711 "memory_domains": [ 00:22:16.711 { 00:22:16.711 "dma_device_id": "system", 00:22:16.711 "dma_device_type": 1 00:22:16.711 }, 00:22:16.711 { 00:22:16.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.711 "dma_device_type": 2 00:22:16.711 } 00:22:16.711 ], 00:22:16.711 "driver_specific": {} 00:22:16.711 } 00:22:16.711 ] 00:22:16.711 06:17:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.711 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.990 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.990 "name": "Existed_Raid", 00:22:16.990 "uuid": "b1782826-7468-4456-b842-5c45794d93e5", 00:22:16.990 "strip_size_kb": 64, 00:22:16.990 "state": "online", 00:22:16.990 "raid_level": "raid5f", 00:22:16.990 "superblock": false, 00:22:16.990 "num_base_bdevs": 4, 00:22:16.990 "num_base_bdevs_discovered": 4, 00:22:16.990 "num_base_bdevs_operational": 4, 00:22:16.990 "base_bdevs_list": [ 00:22:16.990 { 00:22:16.990 "name": "BaseBdev1", 00:22:16.990 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:16.991 "is_configured": true, 00:22:16.991 "data_offset": 0, 00:22:16.991 "data_size": 65536 00:22:16.991 }, 00:22:16.991 { 00:22:16.991 "name": "BaseBdev2", 00:22:16.991 "uuid": "d3a818c4-2a04-4136-8403-f406d03746ef", 00:22:16.991 "is_configured": true, 00:22:16.991 "data_offset": 0, 00:22:16.991 "data_size": 65536 00:22:16.991 }, 00:22:16.991 { 00:22:16.991 "name": "BaseBdev3", 00:22:16.991 "uuid": "0ece9c39-97cb-44ad-aa7d-0a86c0941a81", 00:22:16.991 "is_configured": true, 00:22:16.991 "data_offset": 0, 00:22:16.991 "data_size": 65536 00:22:16.991 }, 00:22:16.991 { 00:22:16.991 "name": "BaseBdev4", 00:22:16.991 "uuid": "d7d5cdb7-775a-44e1-8f0e-2a680296e945", 00:22:16.991 "is_configured": true, 00:22:16.991 "data_offset": 0, 00:22:16.991 "data_size": 65536 00:22:16.991 } 00:22:16.991 ] 00:22:16.991 }' 00:22:16.991 06:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.991 06:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 
00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:17.575 [2024-08-13 06:17:19.315182] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:17.575 "name": "Existed_Raid", 00:22:17.575 "aliases": [ 00:22:17.575 "b1782826-7468-4456-b842-5c45794d93e5" 00:22:17.575 ], 00:22:17.575 "product_name": "Raid Volume", 00:22:17.575 "block_size": 512, 00:22:17.575 "num_blocks": 196608, 00:22:17.575 "uuid": "b1782826-7468-4456-b842-5c45794d93e5", 00:22:17.575 "assigned_rate_limits": { 00:22:17.575 "rw_ios_per_sec": 0, 00:22:17.575 "rw_mbytes_per_sec": 0, 00:22:17.575 "r_mbytes_per_sec": 0, 00:22:17.575 "w_mbytes_per_sec": 0 00:22:17.575 }, 00:22:17.575 "claimed": false, 00:22:17.575 "zoned": false, 00:22:17.575 "supported_io_types": { 00:22:17.575 "read": true, 00:22:17.575 "write": true, 00:22:17.575 "unmap": false, 00:22:17.575 "flush": false, 00:22:17.575 "reset": true, 00:22:17.575 "nvme_admin": false, 00:22:17.575 "nvme_io": false, 00:22:17.575 "nvme_io_md": false, 00:22:17.575 "write_zeroes": true, 00:22:17.575 "zcopy": false, 00:22:17.575 "get_zone_info": false, 00:22:17.575 "zone_management": false, 00:22:17.575 "zone_append": false, 00:22:17.575 "compare": false, 00:22:17.575 "compare_and_write": false, 00:22:17.575 "abort": false, 00:22:17.575 "seek_hole": false, 00:22:17.575 "seek_data": false, 00:22:17.575 "copy": false, 00:22:17.575 "nvme_iov_md": false 00:22:17.575 }, 00:22:17.575 "driver_specific": { 00:22:17.575 "raid": { 00:22:17.575 "uuid": "b1782826-7468-4456-b842-5c45794d93e5", 00:22:17.575 "strip_size_kb": 64, 00:22:17.575 "state": "online", 00:22:17.575 "raid_level": "raid5f", 00:22:17.575 "superblock": false, 00:22:17.575 "num_base_bdevs": 4, 00:22:17.575 "num_base_bdevs_discovered": 4, 00:22:17.575 "num_base_bdevs_operational": 4, 00:22:17.575 "base_bdevs_list": [ 00:22:17.575 { 00:22:17.575 "name": "BaseBdev1", 00:22:17.575 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:17.575 "is_configured": true, 00:22:17.575 "data_offset": 0, 00:22:17.575 "data_size": 65536 00:22:17.575 }, 00:22:17.575 { 00:22:17.575 "name": "BaseBdev2", 00:22:17.575 "uuid": "d3a818c4-2a04-4136-8403-f406d03746ef", 00:22:17.575 "is_configured": true, 00:22:17.575 "data_offset": 0, 00:22:17.575 "data_size": 65536 00:22:17.575 }, 00:22:17.575 { 00:22:17.575 "name": "BaseBdev3", 00:22:17.575 "uuid": "0ece9c39-97cb-44ad-aa7d-0a86c0941a81", 00:22:17.575 "is_configured": true, 00:22:17.575 "data_offset": 0, 00:22:17.575 "data_size": 65536 00:22:17.575 }, 00:22:17.575 { 00:22:17.575 "name": "BaseBdev4", 00:22:17.575 "uuid": "d7d5cdb7-775a-44e1-8f0e-2a680296e945", 00:22:17.575 
"is_configured": true, 00:22:17.575 "data_offset": 0, 00:22:17.575 "data_size": 65536 00:22:17.575 } 00:22:17.575 ] 00:22:17.575 } 00:22:17.575 } 00:22:17.575 }' 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:17.575 BaseBdev2 00:22:17.575 BaseBdev3 00:22:17.575 BaseBdev4' 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:17.575 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:17.835 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:17.835 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:17.835 "name": "BaseBdev1", 00:22:17.835 "aliases": [ 00:22:17.835 "6c3eb2f5-1671-4719-87e1-b942a4502cb5" 00:22:17.835 ], 00:22:17.835 "product_name": "Malloc disk", 00:22:17.835 "block_size": 512, 00:22:17.835 "num_blocks": 65536, 00:22:17.835 "uuid": "6c3eb2f5-1671-4719-87e1-b942a4502cb5", 00:22:17.835 "assigned_rate_limits": { 00:22:17.835 "rw_ios_per_sec": 0, 00:22:17.835 "rw_mbytes_per_sec": 0, 00:22:17.835 "r_mbytes_per_sec": 0, 00:22:17.835 "w_mbytes_per_sec": 0 00:22:17.835 }, 00:22:17.835 "claimed": true, 00:22:17.835 "claim_type": "exclusive_write", 00:22:17.835 "zoned": false, 00:22:17.835 "supported_io_types": { 00:22:17.835 "read": true, 00:22:17.835 "write": true, 00:22:17.835 "unmap": true, 00:22:17.835 "flush": true, 00:22:17.835 "reset": true, 00:22:17.835 "nvme_admin": false, 00:22:17.835 "nvme_io": false, 00:22:17.835 "nvme_io_md": false, 00:22:17.835 "write_zeroes": true, 00:22:17.835 "zcopy": true, 00:22:17.835 "get_zone_info": false, 00:22:17.835 "zone_management": false, 00:22:17.835 "zone_append": false, 00:22:17.835 "compare": false, 00:22:17.835 "compare_and_write": false, 00:22:17.835 "abort": true, 00:22:17.835 "seek_hole": false, 00:22:17.835 "seek_data": false, 00:22:17.835 "copy": true, 00:22:17.835 "nvme_iov_md": false 00:22:17.835 }, 00:22:17.835 "memory_domains": [ 00:22:17.835 { 00:22:17.835 "dma_device_id": "system", 00:22:17.835 "dma_device_type": 1 00:22:17.835 }, 00:22:17.835 { 00:22:17.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.835 "dma_device_type": 2 00:22:17.835 } 00:22:17.835 ], 00:22:17.835 "driver_specific": {} 00:22:17.835 }' 00:22:17.835 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.835 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.095 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.354 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:18.354 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.354 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:18.354 06:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.354 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.354 "name": "BaseBdev2", 00:22:18.354 "aliases": [ 00:22:18.354 "d3a818c4-2a04-4136-8403-f406d03746ef" 00:22:18.354 ], 00:22:18.354 "product_name": "Malloc disk", 00:22:18.354 "block_size": 512, 00:22:18.354 "num_blocks": 65536, 00:22:18.354 "uuid": "d3a818c4-2a04-4136-8403-f406d03746ef", 00:22:18.354 "assigned_rate_limits": { 00:22:18.354 "rw_ios_per_sec": 0, 00:22:18.354 "rw_mbytes_per_sec": 0, 00:22:18.354 "r_mbytes_per_sec": 0, 00:22:18.354 "w_mbytes_per_sec": 0 00:22:18.354 }, 00:22:18.354 "claimed": true, 00:22:18.354 "claim_type": "exclusive_write", 00:22:18.354 "zoned": false, 00:22:18.354 "supported_io_types": { 00:22:18.354 "read": true, 00:22:18.354 "write": true, 00:22:18.354 "unmap": true, 00:22:18.354 "flush": true, 00:22:18.354 "reset": true, 00:22:18.354 "nvme_admin": false, 00:22:18.354 "nvme_io": false, 00:22:18.354 "nvme_io_md": false, 00:22:18.354 "write_zeroes": true, 00:22:18.354 "zcopy": true, 00:22:18.354 "get_zone_info": false, 00:22:18.354 "zone_management": false, 00:22:18.354 "zone_append": false, 00:22:18.354 "compare": false, 00:22:18.354 "compare_and_write": false, 00:22:18.354 "abort": true, 00:22:18.354 "seek_hole": false, 00:22:18.354 "seek_data": false, 00:22:18.354 "copy": true, 00:22:18.354 "nvme_iov_md": false 00:22:18.354 }, 00:22:18.354 "memory_domains": [ 00:22:18.354 { 00:22:18.354 "dma_device_id": "system", 00:22:18.354 "dma_device_type": 1 00:22:18.354 }, 00:22:18.354 { 00:22:18.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.354 "dma_device_type": 2 00:22:18.354 } 00:22:18.354 ], 00:22:18.354 "driver_specific": {} 00:22:18.354 }' 00:22:18.355 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.355 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:18.614 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.614 
06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.874 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:18.874 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.874 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.874 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:18.874 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.874 "name": "BaseBdev3", 00:22:18.874 "aliases": [ 00:22:18.874 "0ece9c39-97cb-44ad-aa7d-0a86c0941a81" 00:22:18.874 ], 00:22:18.874 "product_name": "Malloc disk", 00:22:18.874 "block_size": 512, 00:22:18.874 "num_blocks": 65536, 00:22:18.874 "uuid": "0ece9c39-97cb-44ad-aa7d-0a86c0941a81", 00:22:18.874 "assigned_rate_limits": { 00:22:18.874 "rw_ios_per_sec": 0, 00:22:18.874 "rw_mbytes_per_sec": 0, 00:22:18.874 "r_mbytes_per_sec": 0, 00:22:18.874 "w_mbytes_per_sec": 0 00:22:18.874 }, 00:22:18.874 "claimed": true, 00:22:18.874 "claim_type": "exclusive_write", 00:22:18.874 "zoned": false, 00:22:18.874 "supported_io_types": { 00:22:18.874 "read": true, 00:22:18.874 "write": true, 00:22:18.874 "unmap": true, 00:22:18.874 "flush": true, 00:22:18.874 "reset": true, 00:22:18.874 "nvme_admin": false, 00:22:18.874 "nvme_io": false, 00:22:18.874 "nvme_io_md": false, 00:22:18.874 "write_zeroes": true, 00:22:18.874 "zcopy": true, 00:22:18.874 "get_zone_info": false, 00:22:18.874 "zone_management": false, 00:22:18.874 "zone_append": false, 00:22:18.874 "compare": false, 00:22:18.874 "compare_and_write": false, 00:22:18.874 "abort": true, 00:22:18.874 "seek_hole": false, 00:22:18.874 "seek_data": false, 00:22:18.874 "copy": true, 00:22:18.874 "nvme_iov_md": false 00:22:18.874 }, 00:22:18.874 "memory_domains": [ 00:22:18.874 { 00:22:18.874 "dma_device_id": "system", 00:22:18.874 "dma_device_type": 1 00:22:18.874 }, 00:22:18.874 { 00:22:18.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.874 "dma_device_type": 2 00:22:18.874 } 00:22:18.874 ], 00:22:18.874 "driver_specific": {} 00:22:18.874 }' 00:22:18.874 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.874 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.134 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.393 06:17:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.393 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.393 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:19.393 06:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.394 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.394 "name": "BaseBdev4", 00:22:19.394 "aliases": [ 00:22:19.394 "d7d5cdb7-775a-44e1-8f0e-2a680296e945" 00:22:19.394 ], 00:22:19.394 "product_name": "Malloc disk", 00:22:19.394 "block_size": 512, 00:22:19.394 "num_blocks": 65536, 00:22:19.394 "uuid": "d7d5cdb7-775a-44e1-8f0e-2a680296e945", 00:22:19.394 "assigned_rate_limits": { 00:22:19.394 "rw_ios_per_sec": 0, 00:22:19.394 "rw_mbytes_per_sec": 0, 00:22:19.394 "r_mbytes_per_sec": 0, 00:22:19.394 "w_mbytes_per_sec": 0 00:22:19.394 }, 00:22:19.394 "claimed": true, 00:22:19.394 "claim_type": "exclusive_write", 00:22:19.394 "zoned": false, 00:22:19.394 "supported_io_types": { 00:22:19.394 "read": true, 00:22:19.394 "write": true, 00:22:19.394 "unmap": true, 00:22:19.394 "flush": true, 00:22:19.394 "reset": true, 00:22:19.394 "nvme_admin": false, 00:22:19.394 "nvme_io": false, 00:22:19.394 "nvme_io_md": false, 00:22:19.394 "write_zeroes": true, 00:22:19.394 "zcopy": true, 00:22:19.394 "get_zone_info": false, 00:22:19.394 "zone_management": false, 00:22:19.394 "zone_append": false, 00:22:19.394 "compare": false, 00:22:19.394 "compare_and_write": false, 00:22:19.394 "abort": true, 00:22:19.394 "seek_hole": false, 00:22:19.394 "seek_data": false, 00:22:19.394 "copy": true, 00:22:19.394 "nvme_iov_md": false 00:22:19.394 }, 00:22:19.394 "memory_domains": [ 00:22:19.394 { 00:22:19.394 "dma_device_id": "system", 00:22:19.394 "dma_device_type": 1 00:22:19.394 }, 00:22:19.394 { 00:22:19.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.394 "dma_device_type": 2 00:22:19.394 } 00:22:19.394 ], 00:22:19.394 "driver_specific": {} 00:22:19.394 }' 00:22:19.394 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.394 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.394 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.394 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.654 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:19.914 [2024-08-13 06:17:21.591194] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.914 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.174 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.174 "name": "Existed_Raid", 00:22:20.174 "uuid": "b1782826-7468-4456-b842-5c45794d93e5", 00:22:20.174 "strip_size_kb": 64, 00:22:20.174 "state": "online", 00:22:20.174 "raid_level": "raid5f", 00:22:20.174 "superblock": false, 00:22:20.174 "num_base_bdevs": 4, 00:22:20.174 "num_base_bdevs_discovered": 3, 00:22:20.174 "num_base_bdevs_operational": 3, 00:22:20.174 "base_bdevs_list": [ 00:22:20.174 { 00:22:20.174 "name": null, 00:22:20.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.174 "is_configured": false, 00:22:20.174 "data_offset": 0, 00:22:20.174 "data_size": 65536 00:22:20.174 }, 00:22:20.175 { 00:22:20.175 "name": "BaseBdev2", 00:22:20.175 "uuid": "d3a818c4-2a04-4136-8403-f406d03746ef", 00:22:20.175 "is_configured": true, 00:22:20.175 "data_offset": 0, 00:22:20.175 "data_size": 65536 00:22:20.175 }, 00:22:20.175 { 00:22:20.175 "name": "BaseBdev3", 00:22:20.175 "uuid": "0ece9c39-97cb-44ad-aa7d-0a86c0941a81", 00:22:20.175 "is_configured": true, 00:22:20.175 "data_offset": 0, 00:22:20.175 "data_size": 65536 00:22:20.175 }, 00:22:20.175 { 00:22:20.175 "name": "BaseBdev4", 00:22:20.175 "uuid": "d7d5cdb7-775a-44e1-8f0e-2a680296e945", 00:22:20.175 "is_configured": true, 00:22:20.175 
"data_offset": 0, 00:22:20.175 "data_size": 65536 00:22:20.175 } 00:22:20.175 ] 00:22:20.175 }' 00:22:20.175 06:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.175 06:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.745 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:20.745 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:20.745 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.745 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.011 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:21.012 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.012 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:21.012 [2024-08-13 06:17:22.744726] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:21.012 [2024-08-13 06:17:22.744825] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:21.012 [2024-08-13 06:17:22.755495] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.012 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:21.012 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.012 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.012 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.274 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:21.274 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.274 06:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:21.534 [2024-08-13 06:17:23.154918] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:21.534 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:21.534 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.534 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.534 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.795 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:21.795 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.795 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev4 00:22:21.795 [2024-08-13 06:17:23.553314] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:21.795 [2024-08-13 06:17:23.553366] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:22.055 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:22.326 BaseBdev2 00:22:22.326 06:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:22.326 06:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:22.326 06:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:22.326 06:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:22.326 06:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:22.326 06:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:22.326 06:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:22.588 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:22.588 [ 00:22:22.588 { 00:22:22.588 "name": "BaseBdev2", 00:22:22.588 "aliases": [ 00:22:22.588 "f1a53e0a-338d-462a-b205-88e714155e44" 00:22:22.588 ], 00:22:22.588 "product_name": "Malloc disk", 00:22:22.588 "block_size": 512, 00:22:22.588 "num_blocks": 65536, 00:22:22.588 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:22.588 "assigned_rate_limits": { 00:22:22.588 "rw_ios_per_sec": 0, 00:22:22.588 "rw_mbytes_per_sec": 0, 00:22:22.588 "r_mbytes_per_sec": 0, 00:22:22.588 "w_mbytes_per_sec": 0 00:22:22.588 }, 00:22:22.588 "claimed": false, 00:22:22.588 "zoned": false, 00:22:22.588 "supported_io_types": { 00:22:22.588 "read": true, 00:22:22.588 "write": true, 00:22:22.588 "unmap": true, 00:22:22.588 "flush": true, 00:22:22.588 "reset": true, 00:22:22.588 "nvme_admin": false, 00:22:22.588 "nvme_io": false, 00:22:22.588 "nvme_io_md": false, 00:22:22.588 "write_zeroes": true, 00:22:22.588 "zcopy": true, 
00:22:22.588 "get_zone_info": false, 00:22:22.588 "zone_management": false, 00:22:22.588 "zone_append": false, 00:22:22.588 "compare": false, 00:22:22.588 "compare_and_write": false, 00:22:22.588 "abort": true, 00:22:22.588 "seek_hole": false, 00:22:22.588 "seek_data": false, 00:22:22.588 "copy": true, 00:22:22.588 "nvme_iov_md": false 00:22:22.588 }, 00:22:22.588 "memory_domains": [ 00:22:22.588 { 00:22:22.588 "dma_device_id": "system", 00:22:22.588 "dma_device_type": 1 00:22:22.588 }, 00:22:22.588 { 00:22:22.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.588 "dma_device_type": 2 00:22:22.588 } 00:22:22.588 ], 00:22:22.588 "driver_specific": {} 00:22:22.588 } 00:22:22.588 ] 00:22:22.588 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:22.588 06:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:22.588 06:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:22.589 06:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:22.848 BaseBdev3 00:22:22.848 06:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:22.848 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:22.848 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:22.848 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:22.848 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:22.848 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:22.848 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.108 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:23.367 [ 00:22:23.367 { 00:22:23.367 "name": "BaseBdev3", 00:22:23.367 "aliases": [ 00:22:23.367 "64ffad44-471b-4a66-b215-47ff62e49275" 00:22:23.367 ], 00:22:23.367 "product_name": "Malloc disk", 00:22:23.367 "block_size": 512, 00:22:23.367 "num_blocks": 65536, 00:22:23.367 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:23.367 "assigned_rate_limits": { 00:22:23.367 "rw_ios_per_sec": 0, 00:22:23.367 "rw_mbytes_per_sec": 0, 00:22:23.367 "r_mbytes_per_sec": 0, 00:22:23.367 "w_mbytes_per_sec": 0 00:22:23.367 }, 00:22:23.367 "claimed": false, 00:22:23.367 "zoned": false, 00:22:23.367 "supported_io_types": { 00:22:23.367 "read": true, 00:22:23.367 "write": true, 00:22:23.367 "unmap": true, 00:22:23.367 "flush": true, 00:22:23.367 "reset": true, 00:22:23.367 "nvme_admin": false, 00:22:23.367 "nvme_io": false, 00:22:23.367 "nvme_io_md": false, 00:22:23.367 "write_zeroes": true, 00:22:23.367 "zcopy": true, 00:22:23.367 "get_zone_info": false, 00:22:23.367 "zone_management": false, 00:22:23.367 "zone_append": false, 00:22:23.367 "compare": false, 00:22:23.367 "compare_and_write": false, 00:22:23.367 "abort": true, 00:22:23.367 "seek_hole": false, 00:22:23.367 "seek_data": false, 00:22:23.367 "copy": true, 00:22:23.367 
"nvme_iov_md": false 00:22:23.367 }, 00:22:23.367 "memory_domains": [ 00:22:23.367 { 00:22:23.367 "dma_device_id": "system", 00:22:23.368 "dma_device_type": 1 00:22:23.368 }, 00:22:23.368 { 00:22:23.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.368 "dma_device_type": 2 00:22:23.368 } 00:22:23.368 ], 00:22:23.368 "driver_specific": {} 00:22:23.368 } 00:22:23.368 ] 00:22:23.368 06:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:23.368 06:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:23.368 06:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:23.368 06:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:23.368 BaseBdev4 00:22:23.368 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:23.368 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:22:23.368 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:23.368 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:23.368 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:23.368 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:23.368 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.627 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:23.887 [ 00:22:23.887 { 00:22:23.887 "name": "BaseBdev4", 00:22:23.887 "aliases": [ 00:22:23.887 "e6606787-6a26-4549-a94a-e611cf69a64c" 00:22:23.887 ], 00:22:23.887 "product_name": "Malloc disk", 00:22:23.887 "block_size": 512, 00:22:23.887 "num_blocks": 65536, 00:22:23.887 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:23.887 "assigned_rate_limits": { 00:22:23.887 "rw_ios_per_sec": 0, 00:22:23.887 "rw_mbytes_per_sec": 0, 00:22:23.887 "r_mbytes_per_sec": 0, 00:22:23.887 "w_mbytes_per_sec": 0 00:22:23.887 }, 00:22:23.887 "claimed": false, 00:22:23.887 "zoned": false, 00:22:23.887 "supported_io_types": { 00:22:23.887 "read": true, 00:22:23.887 "write": true, 00:22:23.887 "unmap": true, 00:22:23.887 "flush": true, 00:22:23.887 "reset": true, 00:22:23.887 "nvme_admin": false, 00:22:23.887 "nvme_io": false, 00:22:23.887 "nvme_io_md": false, 00:22:23.887 "write_zeroes": true, 00:22:23.887 "zcopy": true, 00:22:23.887 "get_zone_info": false, 00:22:23.887 "zone_management": false, 00:22:23.887 "zone_append": false, 00:22:23.887 "compare": false, 00:22:23.887 "compare_and_write": false, 00:22:23.887 "abort": true, 00:22:23.887 "seek_hole": false, 00:22:23.887 "seek_data": false, 00:22:23.887 "copy": true, 00:22:23.887 "nvme_iov_md": false 00:22:23.887 }, 00:22:23.887 "memory_domains": [ 00:22:23.887 { 00:22:23.887 "dma_device_id": "system", 00:22:23.887 "dma_device_type": 1 00:22:23.887 }, 00:22:23.887 { 00:22:23.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.887 "dma_device_type": 2 00:22:23.887 } 00:22:23.887 ], 00:22:23.887 
"driver_specific": {} 00:22:23.887 } 00:22:23.887 ] 00:22:23.887 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:23.887 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:23.887 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:23.887 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:24.147 [2024-08-13 06:17:25.686261] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:24.147 [2024-08-13 06:17:25.686310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:24.147 [2024-08-13 06:17:25.686327] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.147 [2024-08-13 06:17:25.687991] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:24.147 [2024-08-13 06:17:25.688053] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.147 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.147 "name": "Existed_Raid", 00:22:24.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.147 "strip_size_kb": 64, 00:22:24.147 "state": "configuring", 00:22:24.147 "raid_level": "raid5f", 00:22:24.147 "superblock": false, 00:22:24.147 "num_base_bdevs": 4, 00:22:24.147 "num_base_bdevs_discovered": 3, 00:22:24.147 "num_base_bdevs_operational": 4, 00:22:24.147 "base_bdevs_list": [ 00:22:24.147 { 00:22:24.147 "name": "BaseBdev1", 00:22:24.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.147 "is_configured": false, 00:22:24.147 "data_offset": 0, 00:22:24.147 "data_size": 0 00:22:24.147 }, 00:22:24.147 { 00:22:24.147 "name": "BaseBdev2", 00:22:24.147 
"uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:24.147 "is_configured": true, 00:22:24.147 "data_offset": 0, 00:22:24.147 "data_size": 65536 00:22:24.147 }, 00:22:24.147 { 00:22:24.147 "name": "BaseBdev3", 00:22:24.147 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:24.147 "is_configured": true, 00:22:24.147 "data_offset": 0, 00:22:24.147 "data_size": 65536 00:22:24.147 }, 00:22:24.147 { 00:22:24.147 "name": "BaseBdev4", 00:22:24.147 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:24.147 "is_configured": true, 00:22:24.147 "data_offset": 0, 00:22:24.147 "data_size": 65536 00:22:24.147 } 00:22:24.147 ] 00:22:24.147 }' 00:22:24.148 06:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.148 06:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.718 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:24.977 [2024-08-13 06:17:26.644549] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.977 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.978 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.237 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.237 "name": "Existed_Raid", 00:22:25.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.237 "strip_size_kb": 64, 00:22:25.237 "state": "configuring", 00:22:25.237 "raid_level": "raid5f", 00:22:25.237 "superblock": false, 00:22:25.237 "num_base_bdevs": 4, 00:22:25.237 "num_base_bdevs_discovered": 2, 00:22:25.237 "num_base_bdevs_operational": 4, 00:22:25.237 "base_bdevs_list": [ 00:22:25.237 { 00:22:25.237 "name": "BaseBdev1", 00:22:25.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.237 "is_configured": false, 00:22:25.237 "data_offset": 0, 00:22:25.237 "data_size": 0 00:22:25.237 }, 00:22:25.237 { 00:22:25.237 "name": null, 00:22:25.237 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:25.237 "is_configured": false, 00:22:25.237 "data_offset": 0, 
00:22:25.237 "data_size": 65536 00:22:25.237 }, 00:22:25.237 { 00:22:25.237 "name": "BaseBdev3", 00:22:25.237 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:25.237 "is_configured": true, 00:22:25.237 "data_offset": 0, 00:22:25.237 "data_size": 65536 00:22:25.237 }, 00:22:25.237 { 00:22:25.237 "name": "BaseBdev4", 00:22:25.237 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:25.237 "is_configured": true, 00:22:25.237 "data_offset": 0, 00:22:25.237 "data_size": 65536 00:22:25.237 } 00:22:25.237 ] 00:22:25.237 }' 00:22:25.237 06:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.237 06:17:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.806 06:17:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.806 06:17:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:26.065 [2024-08-13 06:17:27.801481] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:26.065 BaseBdev1 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:26.065 06:17:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:26.324 06:17:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:26.583 [ 00:22:26.583 { 00:22:26.583 "name": "BaseBdev1", 00:22:26.583 "aliases": [ 00:22:26.583 "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7" 00:22:26.583 ], 00:22:26.583 "product_name": "Malloc disk", 00:22:26.583 "block_size": 512, 00:22:26.583 "num_blocks": 65536, 00:22:26.583 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:26.583 "assigned_rate_limits": { 00:22:26.583 "rw_ios_per_sec": 0, 00:22:26.583 "rw_mbytes_per_sec": 0, 00:22:26.583 "r_mbytes_per_sec": 0, 00:22:26.583 "w_mbytes_per_sec": 0 00:22:26.583 }, 00:22:26.583 "claimed": true, 00:22:26.583 "claim_type": "exclusive_write", 00:22:26.583 "zoned": false, 00:22:26.583 "supported_io_types": { 00:22:26.583 "read": true, 00:22:26.583 "write": true, 00:22:26.583 "unmap": true, 00:22:26.583 "flush": true, 00:22:26.583 "reset": true, 00:22:26.583 "nvme_admin": false, 00:22:26.583 "nvme_io": false, 00:22:26.583 "nvme_io_md": false, 00:22:26.583 "write_zeroes": true, 00:22:26.583 "zcopy": 
true, 00:22:26.583 "get_zone_info": false, 00:22:26.583 "zone_management": false, 00:22:26.583 "zone_append": false, 00:22:26.583 "compare": false, 00:22:26.583 "compare_and_write": false, 00:22:26.583 "abort": true, 00:22:26.583 "seek_hole": false, 00:22:26.583 "seek_data": false, 00:22:26.583 "copy": true, 00:22:26.583 "nvme_iov_md": false 00:22:26.583 }, 00:22:26.583 "memory_domains": [ 00:22:26.583 { 00:22:26.583 "dma_device_id": "system", 00:22:26.583 "dma_device_type": 1 00:22:26.583 }, 00:22:26.583 { 00:22:26.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.583 "dma_device_type": 2 00:22:26.583 } 00:22:26.583 ], 00:22:26.583 "driver_specific": {} 00:22:26.583 } 00:22:26.583 ] 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.583 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.842 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.842 "name": "Existed_Raid", 00:22:26.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.842 "strip_size_kb": 64, 00:22:26.842 "state": "configuring", 00:22:26.842 "raid_level": "raid5f", 00:22:26.842 "superblock": false, 00:22:26.842 "num_base_bdevs": 4, 00:22:26.842 "num_base_bdevs_discovered": 3, 00:22:26.842 "num_base_bdevs_operational": 4, 00:22:26.842 "base_bdevs_list": [ 00:22:26.842 { 00:22:26.842 "name": "BaseBdev1", 00:22:26.842 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:26.842 "is_configured": true, 00:22:26.842 "data_offset": 0, 00:22:26.842 "data_size": 65536 00:22:26.842 }, 00:22:26.842 { 00:22:26.842 "name": null, 00:22:26.842 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:26.842 "is_configured": false, 00:22:26.842 "data_offset": 0, 00:22:26.842 "data_size": 65536 00:22:26.842 }, 00:22:26.842 { 00:22:26.842 "name": "BaseBdev3", 00:22:26.842 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:26.842 "is_configured": true, 00:22:26.842 "data_offset": 0, 00:22:26.842 "data_size": 65536 00:22:26.842 }, 00:22:26.842 { 00:22:26.842 "name": "BaseBdev4", 00:22:26.842 "uuid": 
"e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:26.842 "is_configured": true, 00:22:26.842 "data_offset": 0, 00:22:26.842 "data_size": 65536 00:22:26.842 } 00:22:26.842 ] 00:22:26.842 }' 00:22:26.842 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.842 06:17:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.410 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:27.411 06:17:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.411 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:27.411 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:27.669 [2024-08-13 06:17:29.306988] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:27.669 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:27.669 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:27.669 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.670 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.928 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.928 "name": "Existed_Raid", 00:22:27.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.928 "strip_size_kb": 64, 00:22:27.928 "state": "configuring", 00:22:27.928 "raid_level": "raid5f", 00:22:27.928 "superblock": false, 00:22:27.928 "num_base_bdevs": 4, 00:22:27.928 "num_base_bdevs_discovered": 2, 00:22:27.928 "num_base_bdevs_operational": 4, 00:22:27.928 "base_bdevs_list": [ 00:22:27.928 { 00:22:27.928 "name": "BaseBdev1", 00:22:27.928 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:27.928 "is_configured": true, 00:22:27.928 "data_offset": 0, 00:22:27.928 "data_size": 65536 00:22:27.928 }, 00:22:27.928 { 00:22:27.928 "name": null, 00:22:27.928 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:27.928 "is_configured": false, 00:22:27.928 "data_offset": 0, 00:22:27.928 "data_size": 
65536 00:22:27.928 }, 00:22:27.928 { 00:22:27.928 "name": null, 00:22:27.928 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:27.928 "is_configured": false, 00:22:27.928 "data_offset": 0, 00:22:27.928 "data_size": 65536 00:22:27.928 }, 00:22:27.928 { 00:22:27.928 "name": "BaseBdev4", 00:22:27.928 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:27.928 "is_configured": true, 00:22:27.928 "data_offset": 0, 00:22:27.928 "data_size": 65536 00:22:27.928 } 00:22:27.928 ] 00:22:27.928 }' 00:22:27.928 06:17:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.928 06:17:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.525 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.525 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:28.525 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:28.525 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:28.784 [2024-08-13 06:17:30.433156] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.784 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.043 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.043 "name": "Existed_Raid", 00:22:29.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.043 "strip_size_kb": 64, 00:22:29.043 "state": "configuring", 00:22:29.043 "raid_level": "raid5f", 00:22:29.043 "superblock": false, 00:22:29.043 "num_base_bdevs": 4, 00:22:29.043 "num_base_bdevs_discovered": 3, 00:22:29.043 "num_base_bdevs_operational": 4, 00:22:29.043 "base_bdevs_list": [ 00:22:29.043 { 00:22:29.043 "name": "BaseBdev1", 00:22:29.043 "uuid": 
"bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:29.043 "is_configured": true, 00:22:29.043 "data_offset": 0, 00:22:29.043 "data_size": 65536 00:22:29.043 }, 00:22:29.043 { 00:22:29.043 "name": null, 00:22:29.043 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:29.043 "is_configured": false, 00:22:29.043 "data_offset": 0, 00:22:29.043 "data_size": 65536 00:22:29.043 }, 00:22:29.043 { 00:22:29.043 "name": "BaseBdev3", 00:22:29.043 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:29.043 "is_configured": true, 00:22:29.043 "data_offset": 0, 00:22:29.043 "data_size": 65536 00:22:29.043 }, 00:22:29.043 { 00:22:29.043 "name": "BaseBdev4", 00:22:29.043 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:29.043 "is_configured": true, 00:22:29.043 "data_offset": 0, 00:22:29.043 "data_size": 65536 00:22:29.043 } 00:22:29.043 ] 00:22:29.043 }' 00:22:29.043 06:17:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.043 06:17:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.611 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.611 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:29.870 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:29.871 [2024-08-13 06:17:31.619122] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.871 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.130 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.130 "name": "Existed_Raid", 00:22:30.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.130 "strip_size_kb": 64, 00:22:30.130 "state": 
"configuring", 00:22:30.130 "raid_level": "raid5f", 00:22:30.130 "superblock": false, 00:22:30.130 "num_base_bdevs": 4, 00:22:30.130 "num_base_bdevs_discovered": 2, 00:22:30.130 "num_base_bdevs_operational": 4, 00:22:30.130 "base_bdevs_list": [ 00:22:30.130 { 00:22:30.131 "name": null, 00:22:30.131 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:30.131 "is_configured": false, 00:22:30.131 "data_offset": 0, 00:22:30.131 "data_size": 65536 00:22:30.131 }, 00:22:30.131 { 00:22:30.131 "name": null, 00:22:30.131 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:30.131 "is_configured": false, 00:22:30.131 "data_offset": 0, 00:22:30.131 "data_size": 65536 00:22:30.131 }, 00:22:30.131 { 00:22:30.131 "name": "BaseBdev3", 00:22:30.131 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:30.131 "is_configured": true, 00:22:30.131 "data_offset": 0, 00:22:30.131 "data_size": 65536 00:22:30.131 }, 00:22:30.131 { 00:22:30.131 "name": "BaseBdev4", 00:22:30.131 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:30.131 "is_configured": true, 00:22:30.131 "data_offset": 0, 00:22:30.131 "data_size": 65536 00:22:30.131 } 00:22:30.131 ] 00:22:30.131 }' 00:22:30.131 06:17:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.131 06:17:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.700 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:30.700 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:30.959 [2024-08-13 06:17:32.703968] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.959 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.959 06:17:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.219 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.219 "name": "Existed_Raid", 00:22:31.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.219 "strip_size_kb": 64, 00:22:31.219 "state": "configuring", 00:22:31.219 "raid_level": "raid5f", 00:22:31.219 "superblock": false, 00:22:31.219 "num_base_bdevs": 4, 00:22:31.219 "num_base_bdevs_discovered": 3, 00:22:31.219 "num_base_bdevs_operational": 4, 00:22:31.219 "base_bdevs_list": [ 00:22:31.219 { 00:22:31.219 "name": null, 00:22:31.219 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:31.219 "is_configured": false, 00:22:31.219 "data_offset": 0, 00:22:31.219 "data_size": 65536 00:22:31.219 }, 00:22:31.219 { 00:22:31.219 "name": "BaseBdev2", 00:22:31.219 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:31.219 "is_configured": true, 00:22:31.219 "data_offset": 0, 00:22:31.219 "data_size": 65536 00:22:31.219 }, 00:22:31.219 { 00:22:31.219 "name": "BaseBdev3", 00:22:31.219 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:31.219 "is_configured": true, 00:22:31.219 "data_offset": 0, 00:22:31.219 "data_size": 65536 00:22:31.219 }, 00:22:31.219 { 00:22:31.219 "name": "BaseBdev4", 00:22:31.219 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:31.219 "is_configured": true, 00:22:31.219 "data_offset": 0, 00:22:31.219 "data_size": 65536 00:22:31.219 } 00:22:31.219 ] 00:22:31.219 }' 00:22:31.219 06:17:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.219 06:17:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.788 06:17:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.789 06:17:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:32.052 06:17:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:32.052 06:17:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.052 06:17:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:32.312 06:17:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bb9c69e8-5bb3-4c9c-831f-eda40b9947e7 00:22:32.312 [2024-08-13 06:17:34.032548] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:32.312 [2024-08-13 06:17:34.032595] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:22:32.312 [2024-08-13 06:17:34.032603] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:32.312 [2024-08-13 06:17:34.032842] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:22:32.312 [2024-08-13 06:17:34.033312] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:22:32.312 [2024-08-13 06:17:34.033329] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:22:32.312 [2024-08-13 
06:17:34.033487] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.312 NewBaseBdev 00:22:32.312 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:32.312 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:22:32.312 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:32.312 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:32.312 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:32.312 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:32.312 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:32.571 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:32.831 [ 00:22:32.831 { 00:22:32.831 "name": "NewBaseBdev", 00:22:32.831 "aliases": [ 00:22:32.831 "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7" 00:22:32.831 ], 00:22:32.831 "product_name": "Malloc disk", 00:22:32.831 "block_size": 512, 00:22:32.831 "num_blocks": 65536, 00:22:32.831 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:32.831 "assigned_rate_limits": { 00:22:32.831 "rw_ios_per_sec": 0, 00:22:32.831 "rw_mbytes_per_sec": 0, 00:22:32.831 "r_mbytes_per_sec": 0, 00:22:32.831 "w_mbytes_per_sec": 0 00:22:32.831 }, 00:22:32.831 "claimed": true, 00:22:32.831 "claim_type": "exclusive_write", 00:22:32.831 "zoned": false, 00:22:32.831 "supported_io_types": { 00:22:32.831 "read": true, 00:22:32.831 "write": true, 00:22:32.831 "unmap": true, 00:22:32.831 "flush": true, 00:22:32.831 "reset": true, 00:22:32.831 "nvme_admin": false, 00:22:32.831 "nvme_io": false, 00:22:32.831 "nvme_io_md": false, 00:22:32.831 "write_zeroes": true, 00:22:32.831 "zcopy": true, 00:22:32.831 "get_zone_info": false, 00:22:32.831 "zone_management": false, 00:22:32.831 "zone_append": false, 00:22:32.831 "compare": false, 00:22:32.831 "compare_and_write": false, 00:22:32.831 "abort": true, 00:22:32.831 "seek_hole": false, 00:22:32.831 "seek_data": false, 00:22:32.831 "copy": true, 00:22:32.831 "nvme_iov_md": false 00:22:32.831 }, 00:22:32.831 "memory_domains": [ 00:22:32.831 { 00:22:32.831 "dma_device_id": "system", 00:22:32.831 "dma_device_type": 1 00:22:32.831 }, 00:22:32.831 { 00:22:32.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.831 "dma_device_type": 2 00:22:32.831 } 00:22:32.831 ], 00:22:32.831 "driver_specific": {} 00:22:32.831 } 00:22:32.831 ] 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.831 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.091 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.091 "name": "Existed_Raid", 00:22:33.091 "uuid": "c24fade9-f6ce-4b3e-87f0-fea9e5e0886c", 00:22:33.091 "strip_size_kb": 64, 00:22:33.091 "state": "online", 00:22:33.091 "raid_level": "raid5f", 00:22:33.091 "superblock": false, 00:22:33.091 "num_base_bdevs": 4, 00:22:33.091 "num_base_bdevs_discovered": 4, 00:22:33.091 "num_base_bdevs_operational": 4, 00:22:33.091 "base_bdevs_list": [ 00:22:33.091 { 00:22:33.091 "name": "NewBaseBdev", 00:22:33.091 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:33.091 "is_configured": true, 00:22:33.091 "data_offset": 0, 00:22:33.091 "data_size": 65536 00:22:33.091 }, 00:22:33.091 { 00:22:33.091 "name": "BaseBdev2", 00:22:33.091 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:33.091 "is_configured": true, 00:22:33.091 "data_offset": 0, 00:22:33.091 "data_size": 65536 00:22:33.091 }, 00:22:33.091 { 00:22:33.091 "name": "BaseBdev3", 00:22:33.091 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:33.091 "is_configured": true, 00:22:33.091 "data_offset": 0, 00:22:33.091 "data_size": 65536 00:22:33.091 }, 00:22:33.091 { 00:22:33.091 "name": "BaseBdev4", 00:22:33.091 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:33.091 "is_configured": true, 00:22:33.091 "data_offset": 0, 00:22:33.091 "data_size": 65536 00:22:33.091 } 00:22:33.091 ] 00:22:33.091 }' 00:22:33.091 06:17:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.091 06:17:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:33.661 [2024-08-13 06:17:35.338553] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:33.661 "name": "Existed_Raid", 00:22:33.661 "aliases": [ 00:22:33.661 "c24fade9-f6ce-4b3e-87f0-fea9e5e0886c" 00:22:33.661 ], 00:22:33.661 "product_name": "Raid Volume", 00:22:33.661 "block_size": 512, 00:22:33.661 "num_blocks": 196608, 00:22:33.661 "uuid": "c24fade9-f6ce-4b3e-87f0-fea9e5e0886c", 00:22:33.661 "assigned_rate_limits": { 00:22:33.661 "rw_ios_per_sec": 0, 00:22:33.661 "rw_mbytes_per_sec": 0, 00:22:33.661 "r_mbytes_per_sec": 0, 00:22:33.661 "w_mbytes_per_sec": 0 00:22:33.661 }, 00:22:33.661 "claimed": false, 00:22:33.661 "zoned": false, 00:22:33.661 "supported_io_types": { 00:22:33.661 "read": true, 00:22:33.661 "write": true, 00:22:33.661 "unmap": false, 00:22:33.661 "flush": false, 00:22:33.661 "reset": true, 00:22:33.661 "nvme_admin": false, 00:22:33.661 "nvme_io": false, 00:22:33.661 "nvme_io_md": false, 00:22:33.661 "write_zeroes": true, 00:22:33.661 "zcopy": false, 00:22:33.661 "get_zone_info": false, 00:22:33.661 "zone_management": false, 00:22:33.661 "zone_append": false, 00:22:33.661 "compare": false, 00:22:33.661 "compare_and_write": false, 00:22:33.661 "abort": false, 00:22:33.661 "seek_hole": false, 00:22:33.661 "seek_data": false, 00:22:33.661 "copy": false, 00:22:33.661 "nvme_iov_md": false 00:22:33.661 }, 00:22:33.661 "driver_specific": { 00:22:33.661 "raid": { 00:22:33.661 "uuid": "c24fade9-f6ce-4b3e-87f0-fea9e5e0886c", 00:22:33.661 "strip_size_kb": 64, 00:22:33.661 "state": "online", 00:22:33.661 "raid_level": "raid5f", 00:22:33.661 "superblock": false, 00:22:33.661 "num_base_bdevs": 4, 00:22:33.661 "num_base_bdevs_discovered": 4, 00:22:33.661 "num_base_bdevs_operational": 4, 00:22:33.661 "base_bdevs_list": [ 00:22:33.661 { 00:22:33.661 "name": "NewBaseBdev", 00:22:33.661 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:33.661 "is_configured": true, 00:22:33.661 "data_offset": 0, 00:22:33.661 "data_size": 65536 00:22:33.661 }, 00:22:33.661 { 00:22:33.661 "name": "BaseBdev2", 00:22:33.661 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:33.661 "is_configured": true, 00:22:33.661 "data_offset": 0, 00:22:33.661 "data_size": 65536 00:22:33.661 }, 00:22:33.661 { 00:22:33.661 "name": "BaseBdev3", 00:22:33.661 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:33.661 "is_configured": true, 00:22:33.661 "data_offset": 0, 00:22:33.661 "data_size": 65536 00:22:33.661 }, 00:22:33.661 { 00:22:33.661 "name": "BaseBdev4", 00:22:33.661 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:33.661 "is_configured": true, 00:22:33.661 "data_offset": 0, 00:22:33.661 "data_size": 65536 00:22:33.661 } 00:22:33.661 ] 00:22:33.661 } 00:22:33.661 } 00:22:33.661 }' 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:33.661 BaseBdev2 00:22:33.661 BaseBdev3 00:22:33.661 BaseBdev4' 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:33.661 06:17:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:33.921 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:33.921 "name": "NewBaseBdev", 00:22:33.921 "aliases": [ 00:22:33.921 "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7" 00:22:33.921 ], 00:22:33.921 "product_name": "Malloc disk", 00:22:33.921 "block_size": 512, 00:22:33.921 "num_blocks": 65536, 00:22:33.921 "uuid": "bb9c69e8-5bb3-4c9c-831f-eda40b9947e7", 00:22:33.921 "assigned_rate_limits": { 00:22:33.921 "rw_ios_per_sec": 0, 00:22:33.921 "rw_mbytes_per_sec": 0, 00:22:33.921 "r_mbytes_per_sec": 0, 00:22:33.921 "w_mbytes_per_sec": 0 00:22:33.921 }, 00:22:33.921 "claimed": true, 00:22:33.921 "claim_type": "exclusive_write", 00:22:33.921 "zoned": false, 00:22:33.921 "supported_io_types": { 00:22:33.921 "read": true, 00:22:33.921 "write": true, 00:22:33.921 "unmap": true, 00:22:33.921 "flush": true, 00:22:33.921 "reset": true, 00:22:33.921 "nvme_admin": false, 00:22:33.921 "nvme_io": false, 00:22:33.921 "nvme_io_md": false, 00:22:33.921 "write_zeroes": true, 00:22:33.921 "zcopy": true, 00:22:33.921 "get_zone_info": false, 00:22:33.921 "zone_management": false, 00:22:33.921 "zone_append": false, 00:22:33.921 "compare": false, 00:22:33.921 "compare_and_write": false, 00:22:33.921 "abort": true, 00:22:33.921 "seek_hole": false, 00:22:33.921 "seek_data": false, 00:22:33.921 "copy": true, 00:22:33.921 "nvme_iov_md": false 00:22:33.921 }, 00:22:33.921 "memory_domains": [ 00:22:33.921 { 00:22:33.921 "dma_device_id": "system", 00:22:33.921 "dma_device_type": 1 00:22:33.921 }, 00:22:33.921 { 00:22:33.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.921 "dma_device_type": 2 00:22:33.921 } 00:22:33.921 ], 00:22:33.921 "driver_specific": {} 00:22:33.921 }' 00:22:33.921 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.921 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.921 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:33.921 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.921 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.181 06:17:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:34.441 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.441 "name": 
"BaseBdev2", 00:22:34.441 "aliases": [ 00:22:34.441 "f1a53e0a-338d-462a-b205-88e714155e44" 00:22:34.441 ], 00:22:34.441 "product_name": "Malloc disk", 00:22:34.441 "block_size": 512, 00:22:34.441 "num_blocks": 65536, 00:22:34.441 "uuid": "f1a53e0a-338d-462a-b205-88e714155e44", 00:22:34.441 "assigned_rate_limits": { 00:22:34.441 "rw_ios_per_sec": 0, 00:22:34.441 "rw_mbytes_per_sec": 0, 00:22:34.441 "r_mbytes_per_sec": 0, 00:22:34.441 "w_mbytes_per_sec": 0 00:22:34.441 }, 00:22:34.441 "claimed": true, 00:22:34.441 "claim_type": "exclusive_write", 00:22:34.441 "zoned": false, 00:22:34.441 "supported_io_types": { 00:22:34.441 "read": true, 00:22:34.441 "write": true, 00:22:34.441 "unmap": true, 00:22:34.441 "flush": true, 00:22:34.441 "reset": true, 00:22:34.441 "nvme_admin": false, 00:22:34.441 "nvme_io": false, 00:22:34.441 "nvme_io_md": false, 00:22:34.441 "write_zeroes": true, 00:22:34.441 "zcopy": true, 00:22:34.441 "get_zone_info": false, 00:22:34.441 "zone_management": false, 00:22:34.441 "zone_append": false, 00:22:34.441 "compare": false, 00:22:34.441 "compare_and_write": false, 00:22:34.441 "abort": true, 00:22:34.441 "seek_hole": false, 00:22:34.441 "seek_data": false, 00:22:34.441 "copy": true, 00:22:34.441 "nvme_iov_md": false 00:22:34.441 }, 00:22:34.441 "memory_domains": [ 00:22:34.441 { 00:22:34.441 "dma_device_id": "system", 00:22:34.441 "dma_device_type": 1 00:22:34.441 }, 00:22:34.441 { 00:22:34.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.441 "dma_device_type": 2 00:22:34.441 } 00:22:34.441 ], 00:22:34.441 "driver_specific": {} 00:22:34.441 }' 00:22:34.441 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.441 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.441 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.441 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:34.701 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.961 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.961 "name": "BaseBdev3", 00:22:34.961 "aliases": [ 00:22:34.961 "64ffad44-471b-4a66-b215-47ff62e49275" 00:22:34.961 ], 00:22:34.961 "product_name": "Malloc disk", 00:22:34.961 
"block_size": 512, 00:22:34.961 "num_blocks": 65536, 00:22:34.961 "uuid": "64ffad44-471b-4a66-b215-47ff62e49275", 00:22:34.961 "assigned_rate_limits": { 00:22:34.961 "rw_ios_per_sec": 0, 00:22:34.961 "rw_mbytes_per_sec": 0, 00:22:34.961 "r_mbytes_per_sec": 0, 00:22:34.961 "w_mbytes_per_sec": 0 00:22:34.961 }, 00:22:34.961 "claimed": true, 00:22:34.961 "claim_type": "exclusive_write", 00:22:34.961 "zoned": false, 00:22:34.961 "supported_io_types": { 00:22:34.961 "read": true, 00:22:34.961 "write": true, 00:22:34.961 "unmap": true, 00:22:34.961 "flush": true, 00:22:34.961 "reset": true, 00:22:34.961 "nvme_admin": false, 00:22:34.961 "nvme_io": false, 00:22:34.961 "nvme_io_md": false, 00:22:34.961 "write_zeroes": true, 00:22:34.961 "zcopy": true, 00:22:34.961 "get_zone_info": false, 00:22:34.961 "zone_management": false, 00:22:34.961 "zone_append": false, 00:22:34.961 "compare": false, 00:22:34.961 "compare_and_write": false, 00:22:34.961 "abort": true, 00:22:34.961 "seek_hole": false, 00:22:34.961 "seek_data": false, 00:22:34.961 "copy": true, 00:22:34.961 "nvme_iov_md": false 00:22:34.961 }, 00:22:34.961 "memory_domains": [ 00:22:34.961 { 00:22:34.961 "dma_device_id": "system", 00:22:34.961 "dma_device_type": 1 00:22:34.961 }, 00:22:34.961 { 00:22:34.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.961 "dma_device_type": 2 00:22:34.961 } 00:22:34.961 ], 00:22:34.961 "driver_specific": {} 00:22:34.961 }' 00:22:34.961 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.961 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.961 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.961 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.961 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.221 06:17:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:35.480 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.480 "name": "BaseBdev4", 00:22:35.481 "aliases": [ 00:22:35.481 "e6606787-6a26-4549-a94a-e611cf69a64c" 00:22:35.481 ], 00:22:35.481 "product_name": "Malloc disk", 00:22:35.481 "block_size": 512, 00:22:35.481 "num_blocks": 65536, 00:22:35.481 "uuid": "e6606787-6a26-4549-a94a-e611cf69a64c", 00:22:35.481 "assigned_rate_limits": { 00:22:35.481 
"rw_ios_per_sec": 0, 00:22:35.481 "rw_mbytes_per_sec": 0, 00:22:35.481 "r_mbytes_per_sec": 0, 00:22:35.481 "w_mbytes_per_sec": 0 00:22:35.481 }, 00:22:35.481 "claimed": true, 00:22:35.481 "claim_type": "exclusive_write", 00:22:35.481 "zoned": false, 00:22:35.481 "supported_io_types": { 00:22:35.481 "read": true, 00:22:35.481 "write": true, 00:22:35.481 "unmap": true, 00:22:35.481 "flush": true, 00:22:35.481 "reset": true, 00:22:35.481 "nvme_admin": false, 00:22:35.481 "nvme_io": false, 00:22:35.481 "nvme_io_md": false, 00:22:35.481 "write_zeroes": true, 00:22:35.481 "zcopy": true, 00:22:35.481 "get_zone_info": false, 00:22:35.481 "zone_management": false, 00:22:35.481 "zone_append": false, 00:22:35.481 "compare": false, 00:22:35.481 "compare_and_write": false, 00:22:35.481 "abort": true, 00:22:35.481 "seek_hole": false, 00:22:35.481 "seek_data": false, 00:22:35.481 "copy": true, 00:22:35.481 "nvme_iov_md": false 00:22:35.481 }, 00:22:35.481 "memory_domains": [ 00:22:35.481 { 00:22:35.481 "dma_device_id": "system", 00:22:35.481 "dma_device_type": 1 00:22:35.481 }, 00:22:35.481 { 00:22:35.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.481 "dma_device_type": 2 00:22:35.481 } 00:22:35.481 ], 00:22:35.481 "driver_specific": {} 00:22:35.481 }' 00:22:35.481 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.481 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.481 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.481 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.740 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:36.000 [2024-08-13 06:17:37.666421] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:36.000 [2024-08-13 06:17:37.666451] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.000 [2024-08-13 06:17:37.666527] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.000 [2024-08-13 06:17:37.666772] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.000 [2024-08-13 06:17:37.666792] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 101008 00:22:36.000 06:17:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 101008 ']' 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 101008 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101008 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:36.000 killing process with pid 101008 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101008' 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 101008 00:22:36.000 [2024-08-13 06:17:37.726571] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:36.000 06:17:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 101008 00:22:36.000 [2024-08-13 06:17:37.767277] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:36.261 06:17:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:36.261 00:22:36.261 real 0m27.419s 00:22:36.261 user 0m50.725s 00:22:36.261 sys 0m4.515s 00:22:36.261 06:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:36.261 06:17:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.261 ************************************ 00:22:36.261 END TEST raid5f_state_function_test 00:22:36.261 ************************************ 00:22:36.521 06:17:38 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:22:36.521 06:17:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:36.521 06:17:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:36.521 06:17:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:36.521 ************************************ 00:22:36.521 START TEST raid5f_state_function_test_sb 00:22:36.521 ************************************ 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 true 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:36.521 06:17:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=102002 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:36.521 Process raid pid: 102002 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 102002' 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 102002 /var/tmp/spdk-raid.sock 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 102002 ']' 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.521 06:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.521 [2024-08-13 06:17:38.195488] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:22:36.521 [2024-08-13 06:17:38.195618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.781 [2024-08-13 06:17:38.340708] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.781 [2024-08-13 06:17:38.386216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.781 [2024-08-13 06:17:38.428784] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.781 [2024-08-13 06:17:38.428823] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:37.360 06:17:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:37.360 06:17:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:22:37.360 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:37.647 [2024-08-13 06:17:39.188495] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:37.647 [2024-08-13 06:17:39.188548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:37.647 [2024-08-13 06:17:39.188560] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:37.647 [2024-08-13 06:17:39.188567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:37.647 [2024-08-13 06:17:39.188576] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:37.647 [2024-08-13 06:17:39.188583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:37.647 [2024-08-13 06:17:39.188592] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:37.647 [2024-08-13 06:17:39.188598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.647 "name": "Existed_Raid", 00:22:37.647 "uuid": "711e1bf2-1a13-469b-8dad-8e289db4c771", 00:22:37.647 "strip_size_kb": 64, 00:22:37.647 "state": "configuring", 00:22:37.647 "raid_level": "raid5f", 00:22:37.647 "superblock": true, 00:22:37.647 "num_base_bdevs": 4, 00:22:37.647 "num_base_bdevs_discovered": 0, 00:22:37.647 "num_base_bdevs_operational": 4, 00:22:37.647 "base_bdevs_list": [ 00:22:37.647 { 00:22:37.647 "name": "BaseBdev1", 00:22:37.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.647 "is_configured": false, 00:22:37.647 "data_offset": 0, 00:22:37.647 "data_size": 0 00:22:37.647 }, 00:22:37.647 { 00:22:37.647 "name": "BaseBdev2", 00:22:37.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.647 "is_configured": false, 00:22:37.647 "data_offset": 0, 00:22:37.647 "data_size": 0 00:22:37.647 }, 00:22:37.647 { 00:22:37.647 "name": "BaseBdev3", 00:22:37.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.647 "is_configured": false, 00:22:37.647 "data_offset": 0, 00:22:37.647 "data_size": 0 00:22:37.647 }, 00:22:37.647 { 00:22:37.647 "name": "BaseBdev4", 00:22:37.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.647 "is_configured": false, 00:22:37.647 "data_offset": 0, 00:22:37.647 "data_size": 0 00:22:37.647 } 00:22:37.647 ] 00:22:37.647 }' 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.647 06:17:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.230 06:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:38.490 [2024-08-13 06:17:40.122706] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:38.490 [2024-08-13 06:17:40.122750] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:22:38.490 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:38.750 [2024-08-13 06:17:40.286462] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:38.750 [2024-08-13 06:17:40.286516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:38.750 [2024-08-13 06:17:40.286526] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:22:38.750 [2024-08-13 06:17:40.286534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:38.750 [2024-08-13 06:17:40.286541] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:38.750 [2024-08-13 06:17:40.286548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:38.750 [2024-08-13 06:17:40.286555] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:38.750 [2024-08-13 06:17:40.286562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:38.750 [2024-08-13 06:17:40.486829] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.750 BaseBdev1 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:38.750 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:39.010 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:39.270 [ 00:22:39.270 { 00:22:39.270 "name": "BaseBdev1", 00:22:39.270 "aliases": [ 00:22:39.270 "a2c16ee9-d93a-40a5-8592-fdc18d50d317" 00:22:39.270 ], 00:22:39.270 "product_name": "Malloc disk", 00:22:39.270 "block_size": 512, 00:22:39.270 "num_blocks": 65536, 00:22:39.270 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:39.270 "assigned_rate_limits": { 00:22:39.270 "rw_ios_per_sec": 0, 00:22:39.270 "rw_mbytes_per_sec": 0, 00:22:39.270 "r_mbytes_per_sec": 0, 00:22:39.270 "w_mbytes_per_sec": 0 00:22:39.270 }, 00:22:39.270 "claimed": true, 00:22:39.270 "claim_type": "exclusive_write", 00:22:39.270 "zoned": false, 00:22:39.270 "supported_io_types": { 00:22:39.270 "read": true, 00:22:39.270 "write": true, 00:22:39.270 "unmap": true, 00:22:39.270 "flush": true, 00:22:39.270 "reset": true, 00:22:39.270 "nvme_admin": false, 00:22:39.270 "nvme_io": false, 00:22:39.270 "nvme_io_md": false, 00:22:39.270 "write_zeroes": true, 00:22:39.270 "zcopy": true, 00:22:39.270 "get_zone_info": false, 00:22:39.270 "zone_management": false, 00:22:39.270 "zone_append": false, 00:22:39.270 "compare": false, 00:22:39.270 "compare_and_write": false, 00:22:39.270 "abort": true, 00:22:39.270 "seek_hole": false, 00:22:39.270 "seek_data": false, 00:22:39.270 "copy": true, 00:22:39.270 "nvme_iov_md": false 00:22:39.270 }, 00:22:39.270 "memory_domains": [ 00:22:39.270 { 00:22:39.270 "dma_device_id": "system", 00:22:39.270 
"dma_device_type": 1 00:22:39.270 }, 00:22:39.270 { 00:22:39.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.270 "dma_device_type": 2 00:22:39.270 } 00:22:39.270 ], 00:22:39.270 "driver_specific": {} 00:22:39.270 } 00:22:39.270 ] 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.270 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.271 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.271 06:17:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.531 06:17:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.531 "name": "Existed_Raid", 00:22:39.531 "uuid": "ce9fcf07-81fc-4660-bea0-0b06f34555af", 00:22:39.531 "strip_size_kb": 64, 00:22:39.531 "state": "configuring", 00:22:39.531 "raid_level": "raid5f", 00:22:39.531 "superblock": true, 00:22:39.531 "num_base_bdevs": 4, 00:22:39.531 "num_base_bdevs_discovered": 1, 00:22:39.531 "num_base_bdevs_operational": 4, 00:22:39.531 "base_bdevs_list": [ 00:22:39.531 { 00:22:39.531 "name": "BaseBdev1", 00:22:39.531 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:39.531 "is_configured": true, 00:22:39.531 "data_offset": 2048, 00:22:39.531 "data_size": 63488 00:22:39.531 }, 00:22:39.531 { 00:22:39.531 "name": "BaseBdev2", 00:22:39.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.531 "is_configured": false, 00:22:39.531 "data_offset": 0, 00:22:39.531 "data_size": 0 00:22:39.531 }, 00:22:39.531 { 00:22:39.531 "name": "BaseBdev3", 00:22:39.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.531 "is_configured": false, 00:22:39.531 "data_offset": 0, 00:22:39.531 "data_size": 0 00:22:39.531 }, 00:22:39.531 { 00:22:39.531 "name": "BaseBdev4", 00:22:39.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.531 "is_configured": false, 00:22:39.531 "data_offset": 0, 00:22:39.531 "data_size": 0 00:22:39.531 } 00:22:39.531 ] 00:22:39.531 }' 00:22:39.531 06:17:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.531 06:17:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.100 06:17:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:40.100 [2024-08-13 06:17:41.804683] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:40.100 [2024-08-13 06:17:41.804746] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:22:40.100 06:17:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:40.360 [2024-08-13 06:17:42.032359] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.360 [2024-08-13 06:17:42.034059] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:40.360 [2024-08-13 06:17:42.034100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:40.360 [2024-08-13 06:17:42.034114] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:40.360 [2024-08-13 06:17:42.034121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:40.360 [2024-08-13 06:17:42.034129] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:40.360 [2024-08-13 06:17:42.034143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.360 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.619 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.619 "name": "Existed_Raid", 00:22:40.619 "uuid": 
"45e557eb-aabf-40d8-8463-de9d5845ff5b", 00:22:40.619 "strip_size_kb": 64, 00:22:40.619 "state": "configuring", 00:22:40.619 "raid_level": "raid5f", 00:22:40.619 "superblock": true, 00:22:40.619 "num_base_bdevs": 4, 00:22:40.619 "num_base_bdevs_discovered": 1, 00:22:40.619 "num_base_bdevs_operational": 4, 00:22:40.619 "base_bdevs_list": [ 00:22:40.619 { 00:22:40.619 "name": "BaseBdev1", 00:22:40.619 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:40.619 "is_configured": true, 00:22:40.619 "data_offset": 2048, 00:22:40.619 "data_size": 63488 00:22:40.619 }, 00:22:40.619 { 00:22:40.619 "name": "BaseBdev2", 00:22:40.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.619 "is_configured": false, 00:22:40.619 "data_offset": 0, 00:22:40.619 "data_size": 0 00:22:40.619 }, 00:22:40.619 { 00:22:40.619 "name": "BaseBdev3", 00:22:40.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.619 "is_configured": false, 00:22:40.619 "data_offset": 0, 00:22:40.619 "data_size": 0 00:22:40.619 }, 00:22:40.619 { 00:22:40.619 "name": "BaseBdev4", 00:22:40.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.619 "is_configured": false, 00:22:40.619 "data_offset": 0, 00:22:40.619 "data_size": 0 00:22:40.619 } 00:22:40.619 ] 00:22:40.619 }' 00:22:40.619 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.619 06:17:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:41.189 [2024-08-13 06:17:42.925486] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:41.189 BaseBdev2 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:41.189 06:17:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:41.448 06:17:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:41.708 [ 00:22:41.708 { 00:22:41.708 "name": "BaseBdev2", 00:22:41.708 "aliases": [ 00:22:41.708 "daf947fd-a9a0-4fda-b28a-2a513030b2f0" 00:22:41.708 ], 00:22:41.708 "product_name": "Malloc disk", 00:22:41.708 "block_size": 512, 00:22:41.708 "num_blocks": 65536, 00:22:41.708 "uuid": "daf947fd-a9a0-4fda-b28a-2a513030b2f0", 00:22:41.708 "assigned_rate_limits": { 00:22:41.708 "rw_ios_per_sec": 0, 00:22:41.708 "rw_mbytes_per_sec": 0, 00:22:41.708 "r_mbytes_per_sec": 0, 00:22:41.708 "w_mbytes_per_sec": 0 00:22:41.708 }, 00:22:41.708 "claimed": true, 00:22:41.708 "claim_type": "exclusive_write", 00:22:41.708 "zoned": 
false, 00:22:41.708 "supported_io_types": { 00:22:41.708 "read": true, 00:22:41.708 "write": true, 00:22:41.708 "unmap": true, 00:22:41.708 "flush": true, 00:22:41.708 "reset": true, 00:22:41.708 "nvme_admin": false, 00:22:41.708 "nvme_io": false, 00:22:41.708 "nvme_io_md": false, 00:22:41.708 "write_zeroes": true, 00:22:41.708 "zcopy": true, 00:22:41.708 "get_zone_info": false, 00:22:41.708 "zone_management": false, 00:22:41.708 "zone_append": false, 00:22:41.708 "compare": false, 00:22:41.708 "compare_and_write": false, 00:22:41.708 "abort": true, 00:22:41.708 "seek_hole": false, 00:22:41.708 "seek_data": false, 00:22:41.708 "copy": true, 00:22:41.708 "nvme_iov_md": false 00:22:41.708 }, 00:22:41.708 "memory_domains": [ 00:22:41.708 { 00:22:41.708 "dma_device_id": "system", 00:22:41.708 "dma_device_type": 1 00:22:41.708 }, 00:22:41.708 { 00:22:41.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.708 "dma_device_type": 2 00:22:41.708 } 00:22:41.708 ], 00:22:41.708 "driver_specific": {} 00:22:41.708 } 00:22:41.708 ] 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.708 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.968 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.968 "name": "Existed_Raid", 00:22:41.968 "uuid": "45e557eb-aabf-40d8-8463-de9d5845ff5b", 00:22:41.968 "strip_size_kb": 64, 00:22:41.968 "state": "configuring", 00:22:41.968 "raid_level": "raid5f", 00:22:41.968 "superblock": true, 00:22:41.968 "num_base_bdevs": 4, 00:22:41.968 "num_base_bdevs_discovered": 2, 00:22:41.968 "num_base_bdevs_operational": 4, 00:22:41.968 "base_bdevs_list": [ 00:22:41.968 { 00:22:41.968 "name": "BaseBdev1", 00:22:41.968 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:41.968 "is_configured": true, 
00:22:41.968 "data_offset": 2048, 00:22:41.968 "data_size": 63488 00:22:41.968 }, 00:22:41.968 { 00:22:41.968 "name": "BaseBdev2", 00:22:41.968 "uuid": "daf947fd-a9a0-4fda-b28a-2a513030b2f0", 00:22:41.968 "is_configured": true, 00:22:41.968 "data_offset": 2048, 00:22:41.968 "data_size": 63488 00:22:41.968 }, 00:22:41.968 { 00:22:41.968 "name": "BaseBdev3", 00:22:41.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.968 "is_configured": false, 00:22:41.968 "data_offset": 0, 00:22:41.968 "data_size": 0 00:22:41.968 }, 00:22:41.968 { 00:22:41.968 "name": "BaseBdev4", 00:22:41.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.968 "is_configured": false, 00:22:41.968 "data_offset": 0, 00:22:41.968 "data_size": 0 00:22:41.968 } 00:22:41.968 ] 00:22:41.968 }' 00:22:41.968 06:17:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.968 06:17:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:42.537 [2024-08-13 06:17:44.266324] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:42.537 BaseBdev3 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:42.537 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:42.797 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:43.056 [ 00:22:43.056 { 00:22:43.056 "name": "BaseBdev3", 00:22:43.056 "aliases": [ 00:22:43.056 "2b91fb2e-52bb-47f6-875c-990890b52241" 00:22:43.056 ], 00:22:43.056 "product_name": "Malloc disk", 00:22:43.056 "block_size": 512, 00:22:43.056 "num_blocks": 65536, 00:22:43.056 "uuid": "2b91fb2e-52bb-47f6-875c-990890b52241", 00:22:43.056 "assigned_rate_limits": { 00:22:43.056 "rw_ios_per_sec": 0, 00:22:43.056 "rw_mbytes_per_sec": 0, 00:22:43.056 "r_mbytes_per_sec": 0, 00:22:43.056 "w_mbytes_per_sec": 0 00:22:43.056 }, 00:22:43.056 "claimed": true, 00:22:43.056 "claim_type": "exclusive_write", 00:22:43.056 "zoned": false, 00:22:43.056 "supported_io_types": { 00:22:43.056 "read": true, 00:22:43.056 "write": true, 00:22:43.056 "unmap": true, 00:22:43.056 "flush": true, 00:22:43.056 "reset": true, 00:22:43.056 "nvme_admin": false, 00:22:43.056 "nvme_io": false, 00:22:43.056 "nvme_io_md": false, 00:22:43.056 "write_zeroes": true, 00:22:43.056 "zcopy": true, 00:22:43.056 "get_zone_info": false, 00:22:43.056 "zone_management": false, 00:22:43.056 "zone_append": false, 00:22:43.056 "compare": 
false, 00:22:43.056 "compare_and_write": false, 00:22:43.056 "abort": true, 00:22:43.056 "seek_hole": false, 00:22:43.056 "seek_data": false, 00:22:43.056 "copy": true, 00:22:43.057 "nvme_iov_md": false 00:22:43.057 }, 00:22:43.057 "memory_domains": [ 00:22:43.057 { 00:22:43.057 "dma_device_id": "system", 00:22:43.057 "dma_device_type": 1 00:22:43.057 }, 00:22:43.057 { 00:22:43.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.057 "dma_device_type": 2 00:22:43.057 } 00:22:43.057 ], 00:22:43.057 "driver_specific": {} 00:22:43.057 } 00:22:43.057 ] 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.057 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.316 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.316 "name": "Existed_Raid", 00:22:43.316 "uuid": "45e557eb-aabf-40d8-8463-de9d5845ff5b", 00:22:43.316 "strip_size_kb": 64, 00:22:43.316 "state": "configuring", 00:22:43.316 "raid_level": "raid5f", 00:22:43.316 "superblock": true, 00:22:43.316 "num_base_bdevs": 4, 00:22:43.316 "num_base_bdevs_discovered": 3, 00:22:43.316 "num_base_bdevs_operational": 4, 00:22:43.316 "base_bdevs_list": [ 00:22:43.316 { 00:22:43.316 "name": "BaseBdev1", 00:22:43.316 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:43.316 "is_configured": true, 00:22:43.316 "data_offset": 2048, 00:22:43.316 "data_size": 63488 00:22:43.316 }, 00:22:43.316 { 00:22:43.316 "name": "BaseBdev2", 00:22:43.316 "uuid": "daf947fd-a9a0-4fda-b28a-2a513030b2f0", 00:22:43.316 "is_configured": true, 00:22:43.316 "data_offset": 2048, 00:22:43.316 "data_size": 63488 00:22:43.316 }, 00:22:43.316 { 00:22:43.316 "name": "BaseBdev3", 00:22:43.316 "uuid": "2b91fb2e-52bb-47f6-875c-990890b52241", 00:22:43.316 "is_configured": true, 00:22:43.316 
"data_offset": 2048, 00:22:43.316 "data_size": 63488 00:22:43.316 }, 00:22:43.316 { 00:22:43.316 "name": "BaseBdev4", 00:22:43.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.316 "is_configured": false, 00:22:43.316 "data_offset": 0, 00:22:43.317 "data_size": 0 00:22:43.317 } 00:22:43.317 ] 00:22:43.317 }' 00:22:43.317 06:17:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.317 06:17:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:43.886 [2024-08-13 06:17:45.623176] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:43.886 [2024-08-13 06:17:45.623384] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:22:43.886 [2024-08-13 06:17:45.623402] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:43.886 [2024-08-13 06:17:45.623678] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:22:43.886 [2024-08-13 06:17:45.624170] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:22:43.886 [2024-08-13 06:17:45.624201] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:22:43.886 BaseBdev4 00:22:43.886 [2024-08-13 06:17:45.624311] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:43.886 06:17:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.146 06:17:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:44.406 [ 00:22:44.406 { 00:22:44.406 "name": "BaseBdev4", 00:22:44.406 "aliases": [ 00:22:44.406 "ef846ab3-93bc-4d5b-9497-d1230db52f5e" 00:22:44.406 ], 00:22:44.406 "product_name": "Malloc disk", 00:22:44.406 "block_size": 512, 00:22:44.406 "num_blocks": 65536, 00:22:44.406 "uuid": "ef846ab3-93bc-4d5b-9497-d1230db52f5e", 00:22:44.406 "assigned_rate_limits": { 00:22:44.406 "rw_ios_per_sec": 0, 00:22:44.406 "rw_mbytes_per_sec": 0, 00:22:44.406 "r_mbytes_per_sec": 0, 00:22:44.406 "w_mbytes_per_sec": 0 00:22:44.406 }, 00:22:44.406 "claimed": true, 00:22:44.406 "claim_type": "exclusive_write", 00:22:44.406 "zoned": false, 00:22:44.406 "supported_io_types": { 00:22:44.406 "read": true, 00:22:44.406 "write": true, 00:22:44.406 "unmap": true, 00:22:44.406 "flush": true, 00:22:44.406 
"reset": true, 00:22:44.406 "nvme_admin": false, 00:22:44.406 "nvme_io": false, 00:22:44.406 "nvme_io_md": false, 00:22:44.406 "write_zeroes": true, 00:22:44.406 "zcopy": true, 00:22:44.406 "get_zone_info": false, 00:22:44.406 "zone_management": false, 00:22:44.406 "zone_append": false, 00:22:44.406 "compare": false, 00:22:44.406 "compare_and_write": false, 00:22:44.406 "abort": true, 00:22:44.406 "seek_hole": false, 00:22:44.406 "seek_data": false, 00:22:44.406 "copy": true, 00:22:44.406 "nvme_iov_md": false 00:22:44.406 }, 00:22:44.406 "memory_domains": [ 00:22:44.406 { 00:22:44.406 "dma_device_id": "system", 00:22:44.406 "dma_device_type": 1 00:22:44.406 }, 00:22:44.406 { 00:22:44.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.406 "dma_device_type": 2 00:22:44.406 } 00:22:44.406 ], 00:22:44.406 "driver_specific": {} 00:22:44.406 } 00:22:44.406 ] 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.406 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.666 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.666 "name": "Existed_Raid", 00:22:44.666 "uuid": "45e557eb-aabf-40d8-8463-de9d5845ff5b", 00:22:44.666 "strip_size_kb": 64, 00:22:44.666 "state": "online", 00:22:44.666 "raid_level": "raid5f", 00:22:44.666 "superblock": true, 00:22:44.666 "num_base_bdevs": 4, 00:22:44.666 "num_base_bdevs_discovered": 4, 00:22:44.666 "num_base_bdevs_operational": 4, 00:22:44.666 "base_bdevs_list": [ 00:22:44.666 { 00:22:44.666 "name": "BaseBdev1", 00:22:44.666 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:44.666 "is_configured": true, 00:22:44.666 "data_offset": 2048, 00:22:44.666 "data_size": 63488 00:22:44.666 }, 00:22:44.666 { 00:22:44.666 "name": "BaseBdev2", 00:22:44.666 "uuid": 
"daf947fd-a9a0-4fda-b28a-2a513030b2f0", 00:22:44.666 "is_configured": true, 00:22:44.666 "data_offset": 2048, 00:22:44.666 "data_size": 63488 00:22:44.666 }, 00:22:44.666 { 00:22:44.666 "name": "BaseBdev3", 00:22:44.666 "uuid": "2b91fb2e-52bb-47f6-875c-990890b52241", 00:22:44.666 "is_configured": true, 00:22:44.666 "data_offset": 2048, 00:22:44.666 "data_size": 63488 00:22:44.666 }, 00:22:44.666 { 00:22:44.666 "name": "BaseBdev4", 00:22:44.666 "uuid": "ef846ab3-93bc-4d5b-9497-d1230db52f5e", 00:22:44.666 "is_configured": true, 00:22:44.666 "data_offset": 2048, 00:22:44.666 "data_size": 63488 00:22:44.666 } 00:22:44.666 ] 00:22:44.666 }' 00:22:44.666 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.666 06:17:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:45.236 [2024-08-13 06:17:46.897250] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:45.236 "name": "Existed_Raid", 00:22:45.236 "aliases": [ 00:22:45.236 "45e557eb-aabf-40d8-8463-de9d5845ff5b" 00:22:45.236 ], 00:22:45.236 "product_name": "Raid Volume", 00:22:45.236 "block_size": 512, 00:22:45.236 "num_blocks": 190464, 00:22:45.236 "uuid": "45e557eb-aabf-40d8-8463-de9d5845ff5b", 00:22:45.236 "assigned_rate_limits": { 00:22:45.236 "rw_ios_per_sec": 0, 00:22:45.236 "rw_mbytes_per_sec": 0, 00:22:45.236 "r_mbytes_per_sec": 0, 00:22:45.236 "w_mbytes_per_sec": 0 00:22:45.236 }, 00:22:45.236 "claimed": false, 00:22:45.236 "zoned": false, 00:22:45.236 "supported_io_types": { 00:22:45.236 "read": true, 00:22:45.236 "write": true, 00:22:45.236 "unmap": false, 00:22:45.236 "flush": false, 00:22:45.236 "reset": true, 00:22:45.236 "nvme_admin": false, 00:22:45.236 "nvme_io": false, 00:22:45.236 "nvme_io_md": false, 00:22:45.236 "write_zeroes": true, 00:22:45.236 "zcopy": false, 00:22:45.236 "get_zone_info": false, 00:22:45.236 "zone_management": false, 00:22:45.236 "zone_append": false, 00:22:45.236 "compare": false, 00:22:45.236 "compare_and_write": false, 00:22:45.236 "abort": false, 00:22:45.236 "seek_hole": false, 00:22:45.236 "seek_data": false, 00:22:45.236 "copy": false, 00:22:45.236 "nvme_iov_md": false 00:22:45.236 }, 00:22:45.236 "driver_specific": { 00:22:45.236 "raid": { 00:22:45.236 "uuid": "45e557eb-aabf-40d8-8463-de9d5845ff5b", 00:22:45.236 "strip_size_kb": 64, 00:22:45.236 "state": "online", 00:22:45.236 "raid_level": 
"raid5f", 00:22:45.236 "superblock": true, 00:22:45.236 "num_base_bdevs": 4, 00:22:45.236 "num_base_bdevs_discovered": 4, 00:22:45.236 "num_base_bdevs_operational": 4, 00:22:45.236 "base_bdevs_list": [ 00:22:45.236 { 00:22:45.236 "name": "BaseBdev1", 00:22:45.236 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:45.236 "is_configured": true, 00:22:45.236 "data_offset": 2048, 00:22:45.236 "data_size": 63488 00:22:45.236 }, 00:22:45.236 { 00:22:45.236 "name": "BaseBdev2", 00:22:45.236 "uuid": "daf947fd-a9a0-4fda-b28a-2a513030b2f0", 00:22:45.236 "is_configured": true, 00:22:45.236 "data_offset": 2048, 00:22:45.236 "data_size": 63488 00:22:45.236 }, 00:22:45.236 { 00:22:45.236 "name": "BaseBdev3", 00:22:45.236 "uuid": "2b91fb2e-52bb-47f6-875c-990890b52241", 00:22:45.236 "is_configured": true, 00:22:45.236 "data_offset": 2048, 00:22:45.236 "data_size": 63488 00:22:45.236 }, 00:22:45.236 { 00:22:45.236 "name": "BaseBdev4", 00:22:45.236 "uuid": "ef846ab3-93bc-4d5b-9497-d1230db52f5e", 00:22:45.236 "is_configured": true, 00:22:45.236 "data_offset": 2048, 00:22:45.236 "data_size": 63488 00:22:45.236 } 00:22:45.236 ] 00:22:45.236 } 00:22:45.236 } 00:22:45.236 }' 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:45.236 BaseBdev2 00:22:45.236 BaseBdev3 00:22:45.236 BaseBdev4' 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:45.236 06:17:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:45.496 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:45.496 "name": "BaseBdev1", 00:22:45.496 "aliases": [ 00:22:45.496 "a2c16ee9-d93a-40a5-8592-fdc18d50d317" 00:22:45.496 ], 00:22:45.496 "product_name": "Malloc disk", 00:22:45.496 "block_size": 512, 00:22:45.496 "num_blocks": 65536, 00:22:45.496 "uuid": "a2c16ee9-d93a-40a5-8592-fdc18d50d317", 00:22:45.496 "assigned_rate_limits": { 00:22:45.496 "rw_ios_per_sec": 0, 00:22:45.496 "rw_mbytes_per_sec": 0, 00:22:45.496 "r_mbytes_per_sec": 0, 00:22:45.496 "w_mbytes_per_sec": 0 00:22:45.496 }, 00:22:45.496 "claimed": true, 00:22:45.496 "claim_type": "exclusive_write", 00:22:45.496 "zoned": false, 00:22:45.496 "supported_io_types": { 00:22:45.496 "read": true, 00:22:45.496 "write": true, 00:22:45.496 "unmap": true, 00:22:45.496 "flush": true, 00:22:45.496 "reset": true, 00:22:45.496 "nvme_admin": false, 00:22:45.496 "nvme_io": false, 00:22:45.496 "nvme_io_md": false, 00:22:45.496 "write_zeroes": true, 00:22:45.496 "zcopy": true, 00:22:45.496 "get_zone_info": false, 00:22:45.496 "zone_management": false, 00:22:45.496 "zone_append": false, 00:22:45.496 "compare": false, 00:22:45.496 "compare_and_write": false, 00:22:45.496 "abort": true, 00:22:45.496 "seek_hole": false, 00:22:45.496 "seek_data": false, 00:22:45.496 "copy": true, 00:22:45.496 "nvme_iov_md": false 00:22:45.496 }, 00:22:45.496 "memory_domains": [ 00:22:45.496 { 00:22:45.496 "dma_device_id": "system", 00:22:45.496 "dma_device_type": 1 00:22:45.496 }, 00:22:45.496 { 00:22:45.496 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:45.497 "dma_device_type": 2 00:22:45.497 } 00:22:45.497 ], 00:22:45.497 "driver_specific": {} 00:22:45.497 }' 00:22:45.497 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.497 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.497 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:45.497 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.756 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:46.016 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:46.016 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:46.016 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:46.016 "name": "BaseBdev2", 00:22:46.016 "aliases": [ 00:22:46.016 "daf947fd-a9a0-4fda-b28a-2a513030b2f0" 00:22:46.016 ], 00:22:46.016 "product_name": "Malloc disk", 00:22:46.016 "block_size": 512, 00:22:46.016 "num_blocks": 65536, 00:22:46.016 "uuid": "daf947fd-a9a0-4fda-b28a-2a513030b2f0", 00:22:46.016 "assigned_rate_limits": { 00:22:46.016 "rw_ios_per_sec": 0, 00:22:46.016 "rw_mbytes_per_sec": 0, 00:22:46.016 "r_mbytes_per_sec": 0, 00:22:46.016 "w_mbytes_per_sec": 0 00:22:46.016 }, 00:22:46.016 "claimed": true, 00:22:46.016 "claim_type": "exclusive_write", 00:22:46.016 "zoned": false, 00:22:46.016 "supported_io_types": { 00:22:46.016 "read": true, 00:22:46.016 "write": true, 00:22:46.016 "unmap": true, 00:22:46.016 "flush": true, 00:22:46.016 "reset": true, 00:22:46.016 "nvme_admin": false, 00:22:46.016 "nvme_io": false, 00:22:46.016 "nvme_io_md": false, 00:22:46.016 "write_zeroes": true, 00:22:46.016 "zcopy": true, 00:22:46.016 "get_zone_info": false, 00:22:46.016 "zone_management": false, 00:22:46.016 "zone_append": false, 00:22:46.016 "compare": false, 00:22:46.016 "compare_and_write": false, 00:22:46.016 "abort": true, 00:22:46.016 "seek_hole": false, 00:22:46.016 "seek_data": false, 00:22:46.016 "copy": true, 00:22:46.016 "nvme_iov_md": false 00:22:46.016 }, 00:22:46.016 "memory_domains": [ 00:22:46.016 { 00:22:46.016 "dma_device_id": "system", 00:22:46.016 "dma_device_type": 1 00:22:46.016 }, 00:22:46.016 { 00:22:46.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.016 "dma_device_type": 2 00:22:46.016 } 00:22:46.016 ], 00:22:46.016 
"driver_specific": {} 00:22:46.016 }' 00:22:46.016 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.016 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.276 06:17:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.276 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:46.276 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:46.276 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:46.276 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:46.535 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:46.535 "name": "BaseBdev3", 00:22:46.535 "aliases": [ 00:22:46.535 "2b91fb2e-52bb-47f6-875c-990890b52241" 00:22:46.535 ], 00:22:46.535 "product_name": "Malloc disk", 00:22:46.535 "block_size": 512, 00:22:46.535 "num_blocks": 65536, 00:22:46.535 "uuid": "2b91fb2e-52bb-47f6-875c-990890b52241", 00:22:46.535 "assigned_rate_limits": { 00:22:46.535 "rw_ios_per_sec": 0, 00:22:46.536 "rw_mbytes_per_sec": 0, 00:22:46.536 "r_mbytes_per_sec": 0, 00:22:46.536 "w_mbytes_per_sec": 0 00:22:46.536 }, 00:22:46.536 "claimed": true, 00:22:46.536 "claim_type": "exclusive_write", 00:22:46.536 "zoned": false, 00:22:46.536 "supported_io_types": { 00:22:46.536 "read": true, 00:22:46.536 "write": true, 00:22:46.536 "unmap": true, 00:22:46.536 "flush": true, 00:22:46.536 "reset": true, 00:22:46.536 "nvme_admin": false, 00:22:46.536 "nvme_io": false, 00:22:46.536 "nvme_io_md": false, 00:22:46.536 "write_zeroes": true, 00:22:46.536 "zcopy": true, 00:22:46.536 "get_zone_info": false, 00:22:46.536 "zone_management": false, 00:22:46.536 "zone_append": false, 00:22:46.536 "compare": false, 00:22:46.536 "compare_and_write": false, 00:22:46.536 "abort": true, 00:22:46.536 "seek_hole": false, 00:22:46.536 "seek_data": false, 00:22:46.536 "copy": true, 00:22:46.536 "nvme_iov_md": false 00:22:46.536 }, 00:22:46.536 "memory_domains": [ 00:22:46.536 { 00:22:46.536 "dma_device_id": "system", 00:22:46.536 "dma_device_type": 1 00:22:46.536 }, 00:22:46.536 { 00:22:46.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.536 "dma_device_type": 2 00:22:46.536 } 00:22:46.536 ], 00:22:46.536 "driver_specific": {} 00:22:46.536 }' 00:22:46.536 06:17:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.536 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.536 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:46.536 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.536 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:46.795 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:47.055 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:47.055 "name": "BaseBdev4", 00:22:47.055 "aliases": [ 00:22:47.055 "ef846ab3-93bc-4d5b-9497-d1230db52f5e" 00:22:47.055 ], 00:22:47.055 "product_name": "Malloc disk", 00:22:47.055 "block_size": 512, 00:22:47.055 "num_blocks": 65536, 00:22:47.055 "uuid": "ef846ab3-93bc-4d5b-9497-d1230db52f5e", 00:22:47.055 "assigned_rate_limits": { 00:22:47.055 "rw_ios_per_sec": 0, 00:22:47.055 "rw_mbytes_per_sec": 0, 00:22:47.055 "r_mbytes_per_sec": 0, 00:22:47.055 "w_mbytes_per_sec": 0 00:22:47.055 }, 00:22:47.055 "claimed": true, 00:22:47.055 "claim_type": "exclusive_write", 00:22:47.055 "zoned": false, 00:22:47.055 "supported_io_types": { 00:22:47.055 "read": true, 00:22:47.055 "write": true, 00:22:47.055 "unmap": true, 00:22:47.055 "flush": true, 00:22:47.055 "reset": true, 00:22:47.055 "nvme_admin": false, 00:22:47.055 "nvme_io": false, 00:22:47.055 "nvme_io_md": false, 00:22:47.055 "write_zeroes": true, 00:22:47.055 "zcopy": true, 00:22:47.055 "get_zone_info": false, 00:22:47.055 "zone_management": false, 00:22:47.055 "zone_append": false, 00:22:47.055 "compare": false, 00:22:47.055 "compare_and_write": false, 00:22:47.055 "abort": true, 00:22:47.055 "seek_hole": false, 00:22:47.055 "seek_data": false, 00:22:47.055 "copy": true, 00:22:47.055 "nvme_iov_md": false 00:22:47.055 }, 00:22:47.055 "memory_domains": [ 00:22:47.055 { 00:22:47.055 "dma_device_id": "system", 00:22:47.055 "dma_device_type": 1 00:22:47.055 }, 00:22:47.055 { 00:22:47.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.055 "dma_device_type": 2 00:22:47.055 } 00:22:47.055 ], 00:22:47.055 "driver_specific": {} 00:22:47.055 }' 00:22:47.055 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:47.055 06:17:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:47.055 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:47.056 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:47.056 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:47.315 06:17:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:47.575 [2024-08-13 06:17:49.161356] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:47.575 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:47.575 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:22:47.575 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:47.575 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:22:47.575 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:47.575 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:47.575 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.576 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.835 06:17:49 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.835 "name": "Existed_Raid", 00:22:47.835 "uuid": "45e557eb-aabf-40d8-8463-de9d5845ff5b", 00:22:47.835 "strip_size_kb": 64, 00:22:47.835 "state": "online", 00:22:47.835 "raid_level": "raid5f", 00:22:47.835 "superblock": true, 00:22:47.835 "num_base_bdevs": 4, 00:22:47.835 "num_base_bdevs_discovered": 3, 00:22:47.835 "num_base_bdevs_operational": 3, 00:22:47.835 "base_bdevs_list": [ 00:22:47.835 { 00:22:47.835 "name": null, 00:22:47.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.835 "is_configured": false, 00:22:47.835 "data_offset": 2048, 00:22:47.835 "data_size": 63488 00:22:47.835 }, 00:22:47.835 { 00:22:47.835 "name": "BaseBdev2", 00:22:47.835 "uuid": "daf947fd-a9a0-4fda-b28a-2a513030b2f0", 00:22:47.835 "is_configured": true, 00:22:47.835 "data_offset": 2048, 00:22:47.835 "data_size": 63488 00:22:47.835 }, 00:22:47.835 { 00:22:47.835 "name": "BaseBdev3", 00:22:47.835 "uuid": "2b91fb2e-52bb-47f6-875c-990890b52241", 00:22:47.835 "is_configured": true, 00:22:47.835 "data_offset": 2048, 00:22:47.835 "data_size": 63488 00:22:47.835 }, 00:22:47.835 { 00:22:47.835 "name": "BaseBdev4", 00:22:47.835 "uuid": "ef846ab3-93bc-4d5b-9497-d1230db52f5e", 00:22:47.835 "is_configured": true, 00:22:47.835 "data_offset": 2048, 00:22:47.835 "data_size": 63488 00:22:47.835 } 00:22:47.835 ] 00:22:47.835 }' 00:22:47.835 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.835 06:17:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:48.403 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:48.403 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.403 06:17:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:48.403 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:48.403 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:48.403 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:48.663 [2024-08-13 06:17:50.286694] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:48.663 [2024-08-13 06:17:50.286848] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.663 [2024-08-13 06:17:50.297712] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.663 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:48.663 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:48.663 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.663 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:48.923 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:48.923 
06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:48.923 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:48.923 [2024-08-13 06:17:50.693115] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:49.182 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:49.182 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:49.182 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:49.182 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.182 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:49.182 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:49.182 06:17:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:49.442 [2024-08-13 06:17:51.119327] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:49.442 [2024-08-13 06:17:51.119385] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:22:49.442 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:49.442 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:49.442 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.442 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:49.702 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:49.702 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:49.702 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:49.702 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:49.702 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:49.702 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:49.962 BaseBdev2 00:22:49.962 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:49.962 06:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:49.962 06:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:49.962 06:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:49.962 06:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:49.962 06:17:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:49.962 06:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:50.222 06:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:50.222 [ 00:22:50.222 { 00:22:50.222 "name": "BaseBdev2", 00:22:50.222 "aliases": [ 00:22:50.222 "9aac321f-2c15-4915-871e-b351a6b8aef9" 00:22:50.222 ], 00:22:50.222 "product_name": "Malloc disk", 00:22:50.222 "block_size": 512, 00:22:50.222 "num_blocks": 65536, 00:22:50.222 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:50.222 "assigned_rate_limits": { 00:22:50.222 "rw_ios_per_sec": 0, 00:22:50.222 "rw_mbytes_per_sec": 0, 00:22:50.222 "r_mbytes_per_sec": 0, 00:22:50.222 "w_mbytes_per_sec": 0 00:22:50.222 }, 00:22:50.222 "claimed": false, 00:22:50.222 "zoned": false, 00:22:50.222 "supported_io_types": { 00:22:50.222 "read": true, 00:22:50.222 "write": true, 00:22:50.222 "unmap": true, 00:22:50.222 "flush": true, 00:22:50.222 "reset": true, 00:22:50.222 "nvme_admin": false, 00:22:50.222 "nvme_io": false, 00:22:50.222 "nvme_io_md": false, 00:22:50.222 "write_zeroes": true, 00:22:50.222 "zcopy": true, 00:22:50.222 "get_zone_info": false, 00:22:50.222 "zone_management": false, 00:22:50.222 "zone_append": false, 00:22:50.222 "compare": false, 00:22:50.222 "compare_and_write": false, 00:22:50.222 "abort": true, 00:22:50.222 "seek_hole": false, 00:22:50.222 "seek_data": false, 00:22:50.222 "copy": true, 00:22:50.222 "nvme_iov_md": false 00:22:50.222 }, 00:22:50.222 "memory_domains": [ 00:22:50.222 { 00:22:50.222 "dma_device_id": "system", 00:22:50.222 "dma_device_type": 1 00:22:50.222 }, 00:22:50.222 { 00:22:50.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.222 "dma_device_type": 2 00:22:50.222 } 00:22:50.222 ], 00:22:50.222 "driver_specific": {} 00:22:50.222 } 00:22:50.222 ] 00:22:50.222 06:17:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:50.222 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:50.222 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:50.222 06:17:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:50.482 BaseBdev3 00:22:50.482 06:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:50.482 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:50.482 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:50.482 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:50.482 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:50.482 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:50.482 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:22:50.742 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:50.742 [ 00:22:50.742 { 00:22:50.742 "name": "BaseBdev3", 00:22:50.742 "aliases": [ 00:22:50.742 "6adac89d-ec7f-428e-814a-bc5416a44031" 00:22:50.742 ], 00:22:50.742 "product_name": "Malloc disk", 00:22:50.742 "block_size": 512, 00:22:50.742 "num_blocks": 65536, 00:22:50.742 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:50.742 "assigned_rate_limits": { 00:22:50.742 "rw_ios_per_sec": 0, 00:22:50.742 "rw_mbytes_per_sec": 0, 00:22:50.742 "r_mbytes_per_sec": 0, 00:22:50.742 "w_mbytes_per_sec": 0 00:22:50.742 }, 00:22:50.742 "claimed": false, 00:22:50.742 "zoned": false, 00:22:50.742 "supported_io_types": { 00:22:50.742 "read": true, 00:22:50.742 "write": true, 00:22:50.742 "unmap": true, 00:22:50.742 "flush": true, 00:22:50.742 "reset": true, 00:22:50.742 "nvme_admin": false, 00:22:50.742 "nvme_io": false, 00:22:50.742 "nvme_io_md": false, 00:22:50.742 "write_zeroes": true, 00:22:50.742 "zcopy": true, 00:22:50.742 "get_zone_info": false, 00:22:50.742 "zone_management": false, 00:22:50.742 "zone_append": false, 00:22:50.742 "compare": false, 00:22:50.742 "compare_and_write": false, 00:22:50.742 "abort": true, 00:22:50.742 "seek_hole": false, 00:22:50.742 "seek_data": false, 00:22:50.742 "copy": true, 00:22:50.742 "nvme_iov_md": false 00:22:50.742 }, 00:22:50.743 "memory_domains": [ 00:22:50.743 { 00:22:50.743 "dma_device_id": "system", 00:22:50.743 "dma_device_type": 1 00:22:50.743 }, 00:22:50.743 { 00:22:50.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.743 "dma_device_type": 2 00:22:50.743 } 00:22:50.743 ], 00:22:50.743 "driver_specific": {} 00:22:50.743 } 00:22:50.743 ] 00:22:50.743 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:50.743 06:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:50.743 06:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:50.743 06:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:51.002 BaseBdev4 00:22:51.002 06:17:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:51.002 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:22:51.002 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:51.002 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:51.002 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:51.002 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:51.002 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:51.262 06:17:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:51.522 [ 00:22:51.522 { 00:22:51.522 "name": 
"BaseBdev4", 00:22:51.522 "aliases": [ 00:22:51.522 "35452fda-fcf0-408f-868c-951f929f5067" 00:22:51.522 ], 00:22:51.522 "product_name": "Malloc disk", 00:22:51.522 "block_size": 512, 00:22:51.522 "num_blocks": 65536, 00:22:51.522 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:51.522 "assigned_rate_limits": { 00:22:51.522 "rw_ios_per_sec": 0, 00:22:51.522 "rw_mbytes_per_sec": 0, 00:22:51.522 "r_mbytes_per_sec": 0, 00:22:51.522 "w_mbytes_per_sec": 0 00:22:51.522 }, 00:22:51.522 "claimed": false, 00:22:51.522 "zoned": false, 00:22:51.522 "supported_io_types": { 00:22:51.522 "read": true, 00:22:51.522 "write": true, 00:22:51.522 "unmap": true, 00:22:51.522 "flush": true, 00:22:51.522 "reset": true, 00:22:51.522 "nvme_admin": false, 00:22:51.522 "nvme_io": false, 00:22:51.522 "nvme_io_md": false, 00:22:51.522 "write_zeroes": true, 00:22:51.522 "zcopy": true, 00:22:51.522 "get_zone_info": false, 00:22:51.522 "zone_management": false, 00:22:51.522 "zone_append": false, 00:22:51.522 "compare": false, 00:22:51.522 "compare_and_write": false, 00:22:51.522 "abort": true, 00:22:51.522 "seek_hole": false, 00:22:51.522 "seek_data": false, 00:22:51.522 "copy": true, 00:22:51.522 "nvme_iov_md": false 00:22:51.522 }, 00:22:51.522 "memory_domains": [ 00:22:51.522 { 00:22:51.522 "dma_device_id": "system", 00:22:51.522 "dma_device_type": 1 00:22:51.522 }, 00:22:51.522 { 00:22:51.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.522 "dma_device_type": 2 00:22:51.522 } 00:22:51.522 ], 00:22:51.522 "driver_specific": {} 00:22:51.522 } 00:22:51.522 ] 00:22:51.522 06:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:51.522 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:51.522 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:51.522 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:51.522 [2024-08-13 06:17:53.285021] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:51.522 [2024-08-13 06:17:53.285078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:51.522 [2024-08-13 06:17:53.285094] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:51.522 [2024-08-13 06:17:53.286679] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.522 [2024-08-13 06:17:53.286741] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.782 "name": "Existed_Raid", 00:22:51.782 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:22:51.782 "strip_size_kb": 64, 00:22:51.782 "state": "configuring", 00:22:51.782 "raid_level": "raid5f", 00:22:51.782 "superblock": true, 00:22:51.782 "num_base_bdevs": 4, 00:22:51.782 "num_base_bdevs_discovered": 3, 00:22:51.782 "num_base_bdevs_operational": 4, 00:22:51.782 "base_bdevs_list": [ 00:22:51.782 { 00:22:51.782 "name": "BaseBdev1", 00:22:51.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.782 "is_configured": false, 00:22:51.782 "data_offset": 0, 00:22:51.782 "data_size": 0 00:22:51.782 }, 00:22:51.782 { 00:22:51.782 "name": "BaseBdev2", 00:22:51.782 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:51.782 "is_configured": true, 00:22:51.782 "data_offset": 2048, 00:22:51.782 "data_size": 63488 00:22:51.782 }, 00:22:51.782 { 00:22:51.782 "name": "BaseBdev3", 00:22:51.782 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:51.782 "is_configured": true, 00:22:51.782 "data_offset": 2048, 00:22:51.782 "data_size": 63488 00:22:51.782 }, 00:22:51.782 { 00:22:51.782 "name": "BaseBdev4", 00:22:51.782 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:51.782 "is_configured": true, 00:22:51.782 "data_offset": 2048, 00:22:51.782 "data_size": 63488 00:22:51.782 } 00:22:51.782 ] 00:22:51.782 }' 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.782 06:17:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.351 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:52.610 [2024-08-13 06:17:54.219430] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.610 06:17:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.610 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.870 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.870 "name": "Existed_Raid", 00:22:52.870 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:22:52.870 "strip_size_kb": 64, 00:22:52.870 "state": "configuring", 00:22:52.870 "raid_level": "raid5f", 00:22:52.870 "superblock": true, 00:22:52.870 "num_base_bdevs": 4, 00:22:52.870 "num_base_bdevs_discovered": 2, 00:22:52.870 "num_base_bdevs_operational": 4, 00:22:52.870 "base_bdevs_list": [ 00:22:52.870 { 00:22:52.870 "name": "BaseBdev1", 00:22:52.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.870 "is_configured": false, 00:22:52.870 "data_offset": 0, 00:22:52.870 "data_size": 0 00:22:52.870 }, 00:22:52.870 { 00:22:52.870 "name": null, 00:22:52.870 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:52.870 "is_configured": false, 00:22:52.870 "data_offset": 2048, 00:22:52.870 "data_size": 63488 00:22:52.870 }, 00:22:52.870 { 00:22:52.870 "name": "BaseBdev3", 00:22:52.870 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:52.870 "is_configured": true, 00:22:52.870 "data_offset": 2048, 00:22:52.870 "data_size": 63488 00:22:52.870 }, 00:22:52.870 { 00:22:52.870 "name": "BaseBdev4", 00:22:52.870 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:52.870 "is_configured": true, 00:22:52.870 "data_offset": 2048, 00:22:52.870 "data_size": 63488 00:22:52.870 } 00:22:52.870 ] 00:22:52.870 }' 00:22:52.870 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.870 06:17:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.439 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:53.439 06:17:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.439 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:53.439 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:53.699 [2024-08-13 06:17:55.324720] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:53.699 BaseBdev1 00:22:53.699 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:53.699 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:53.699 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:22:53.699 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:53.699 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:53.699 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:53.699 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:53.959 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:53.959 [ 00:22:53.959 { 00:22:53.959 "name": "BaseBdev1", 00:22:53.959 "aliases": [ 00:22:53.959 "50b89512-0882-44e8-b05c-7f02adeb8102" 00:22:53.959 ], 00:22:53.959 "product_name": "Malloc disk", 00:22:53.959 "block_size": 512, 00:22:53.959 "num_blocks": 65536, 00:22:53.959 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:22:53.959 "assigned_rate_limits": { 00:22:53.959 "rw_ios_per_sec": 0, 00:22:53.959 "rw_mbytes_per_sec": 0, 00:22:53.959 "r_mbytes_per_sec": 0, 00:22:53.959 "w_mbytes_per_sec": 0 00:22:53.959 }, 00:22:53.959 "claimed": true, 00:22:53.959 "claim_type": "exclusive_write", 00:22:53.959 "zoned": false, 00:22:53.959 "supported_io_types": { 00:22:53.959 "read": true, 00:22:53.959 "write": true, 00:22:53.959 "unmap": true, 00:22:53.959 "flush": true, 00:22:53.959 "reset": true, 00:22:53.959 "nvme_admin": false, 00:22:53.959 "nvme_io": false, 00:22:53.959 "nvme_io_md": false, 00:22:53.959 "write_zeroes": true, 00:22:53.959 "zcopy": true, 00:22:53.959 "get_zone_info": false, 00:22:53.959 "zone_management": false, 00:22:53.959 "zone_append": false, 00:22:53.959 "compare": false, 00:22:53.959 "compare_and_write": false, 00:22:53.959 "abort": true, 00:22:53.959 "seek_hole": false, 00:22:53.959 "seek_data": false, 00:22:53.959 "copy": true, 00:22:53.959 "nvme_iov_md": false 00:22:53.959 }, 00:22:53.959 "memory_domains": [ 00:22:53.959 { 00:22:53.959 "dma_device_id": "system", 00:22:53.959 "dma_device_type": 1 00:22:53.959 }, 00:22:53.959 { 00:22:53.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.959 "dma_device_type": 2 00:22:53.959 } 00:22:53.959 ], 00:22:53.959 "driver_specific": {} 00:22:53.959 } 00:22:53.959 ] 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.219 06:17:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.219 "name": "Existed_Raid", 00:22:54.219 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:22:54.219 "strip_size_kb": 64, 00:22:54.219 "state": "configuring", 00:22:54.219 "raid_level": "raid5f", 00:22:54.219 "superblock": true, 00:22:54.219 "num_base_bdevs": 4, 00:22:54.219 "num_base_bdevs_discovered": 3, 00:22:54.219 "num_base_bdevs_operational": 4, 00:22:54.219 "base_bdevs_list": [ 00:22:54.219 { 00:22:54.219 "name": "BaseBdev1", 00:22:54.219 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:22:54.219 "is_configured": true, 00:22:54.219 "data_offset": 2048, 00:22:54.219 "data_size": 63488 00:22:54.219 }, 00:22:54.219 { 00:22:54.219 "name": null, 00:22:54.219 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:54.219 "is_configured": false, 00:22:54.219 "data_offset": 2048, 00:22:54.219 "data_size": 63488 00:22:54.219 }, 00:22:54.219 { 00:22:54.219 "name": "BaseBdev3", 00:22:54.219 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:54.219 "is_configured": true, 00:22:54.219 "data_offset": 2048, 00:22:54.219 "data_size": 63488 00:22:54.219 }, 00:22:54.219 { 00:22:54.219 "name": "BaseBdev4", 00:22:54.219 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:54.219 "is_configured": true, 00:22:54.219 "data_offset": 2048, 00:22:54.219 "data_size": 63488 00:22:54.219 } 00:22:54.219 ] 00:22:54.219 }' 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.219 06:17:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.788 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.788 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:55.048 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:55.048 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:55.307 [2024-08-13 06:17:56.906155] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.307 06:17:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.567 06:17:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.567 "name": "Existed_Raid", 00:22:55.567 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:22:55.567 "strip_size_kb": 64, 00:22:55.567 "state": "configuring", 00:22:55.567 "raid_level": "raid5f", 00:22:55.567 "superblock": true, 00:22:55.567 "num_base_bdevs": 4, 00:22:55.567 "num_base_bdevs_discovered": 2, 00:22:55.567 "num_base_bdevs_operational": 4, 00:22:55.567 "base_bdevs_list": [ 00:22:55.567 { 00:22:55.567 "name": "BaseBdev1", 00:22:55.567 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:22:55.567 "is_configured": true, 00:22:55.567 "data_offset": 2048, 00:22:55.567 "data_size": 63488 00:22:55.567 }, 00:22:55.567 { 00:22:55.567 "name": null, 00:22:55.567 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:55.567 "is_configured": false, 00:22:55.567 "data_offset": 2048, 00:22:55.567 "data_size": 63488 00:22:55.567 }, 00:22:55.567 { 00:22:55.567 "name": null, 00:22:55.567 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:55.567 "is_configured": false, 00:22:55.567 "data_offset": 2048, 00:22:55.567 "data_size": 63488 00:22:55.567 }, 00:22:55.567 { 00:22:55.567 "name": "BaseBdev4", 00:22:55.567 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:55.567 "is_configured": true, 00:22:55.567 "data_offset": 2048, 00:22:55.567 "data_size": 63488 00:22:55.567 } 00:22:55.567 ] 00:22:55.567 }' 00:22:55.567 06:17:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.567 06:17:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.136 06:17:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:56.136 06:17:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.136 06:17:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:56.136 06:17:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:56.395 [2024-08-13 06:17:58.040264] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:56.395 06:17:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.395 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.655 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.655 "name": "Existed_Raid", 00:22:56.655 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:22:56.655 "strip_size_kb": 64, 00:22:56.655 "state": "configuring", 00:22:56.655 "raid_level": "raid5f", 00:22:56.655 "superblock": true, 00:22:56.655 "num_base_bdevs": 4, 00:22:56.655 "num_base_bdevs_discovered": 3, 00:22:56.655 "num_base_bdevs_operational": 4, 00:22:56.655 "base_bdevs_list": [ 00:22:56.655 { 00:22:56.655 "name": "BaseBdev1", 00:22:56.655 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:22:56.655 "is_configured": true, 00:22:56.655 "data_offset": 2048, 00:22:56.655 "data_size": 63488 00:22:56.655 }, 00:22:56.655 { 00:22:56.655 "name": null, 00:22:56.655 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:56.655 "is_configured": false, 00:22:56.655 "data_offset": 2048, 00:22:56.655 "data_size": 63488 00:22:56.655 }, 00:22:56.655 { 00:22:56.655 "name": "BaseBdev3", 00:22:56.655 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:56.655 "is_configured": true, 00:22:56.655 "data_offset": 2048, 00:22:56.655 "data_size": 63488 00:22:56.655 }, 00:22:56.655 { 00:22:56.655 "name": "BaseBdev4", 00:22:56.655 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:56.655 "is_configured": true, 00:22:56.655 "data_offset": 2048, 00:22:56.655 "data_size": 63488 00:22:56.655 } 00:22:56.655 ] 00:22:56.655 }' 00:22:56.655 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.655 06:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.224 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:57.224 06:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.224 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:57.224 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:57.484 [2024-08-13 06:17:59.190480] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.484 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.744 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:57.744 "name": "Existed_Raid", 00:22:57.744 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:22:57.744 "strip_size_kb": 64, 00:22:57.744 "state": "configuring", 00:22:57.744 "raid_level": "raid5f", 00:22:57.744 "superblock": true, 00:22:57.744 "num_base_bdevs": 4, 00:22:57.744 "num_base_bdevs_discovered": 2, 00:22:57.744 "num_base_bdevs_operational": 4, 00:22:57.744 "base_bdevs_list": [ 00:22:57.744 { 00:22:57.744 "name": null, 00:22:57.744 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:22:57.744 "is_configured": false, 00:22:57.744 "data_offset": 2048, 00:22:57.744 "data_size": 63488 00:22:57.744 }, 00:22:57.744 { 00:22:57.744 "name": null, 00:22:57.744 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:57.744 "is_configured": false, 00:22:57.744 "data_offset": 2048, 00:22:57.744 "data_size": 63488 00:22:57.744 }, 00:22:57.744 { 00:22:57.744 "name": "BaseBdev3", 00:22:57.744 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:57.744 "is_configured": true, 00:22:57.744 "data_offset": 2048, 00:22:57.744 "data_size": 63488 00:22:57.744 }, 00:22:57.744 { 00:22:57.744 "name": "BaseBdev4", 00:22:57.744 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:57.744 "is_configured": true, 00:22:57.744 "data_offset": 2048, 00:22:57.744 "data_size": 63488 00:22:57.744 } 00:22:57.744 ] 00:22:57.744 }' 00:22:57.744 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:57.744 06:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.314 06:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:58.314 06:17:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:58.573 [2024-08-13 06:18:00.299032] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.573 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.835 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:58.835 "name": "Existed_Raid", 00:22:58.835 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:22:58.835 "strip_size_kb": 64, 00:22:58.835 "state": "configuring", 00:22:58.835 "raid_level": "raid5f", 00:22:58.835 "superblock": true, 00:22:58.835 "num_base_bdevs": 4, 00:22:58.835 "num_base_bdevs_discovered": 3, 00:22:58.835 "num_base_bdevs_operational": 4, 00:22:58.835 "base_bdevs_list": [ 00:22:58.835 { 00:22:58.835 "name": null, 00:22:58.835 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:22:58.835 "is_configured": false, 00:22:58.835 "data_offset": 2048, 00:22:58.835 "data_size": 63488 00:22:58.835 }, 00:22:58.835 { 00:22:58.835 "name": "BaseBdev2", 00:22:58.835 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:22:58.835 "is_configured": true, 00:22:58.835 "data_offset": 2048, 00:22:58.835 "data_size": 63488 00:22:58.835 }, 00:22:58.835 { 00:22:58.835 "name": "BaseBdev3", 00:22:58.835 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:22:58.835 "is_configured": true, 00:22:58.835 "data_offset": 2048, 00:22:58.835 "data_size": 63488 00:22:58.835 }, 00:22:58.835 { 00:22:58.835 "name": "BaseBdev4", 00:22:58.835 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:22:58.835 "is_configured": true, 00:22:58.835 "data_offset": 2048, 00:22:58.835 "data_size": 63488 
00:22:58.835 } 00:22:58.835 ] 00:22:58.835 }' 00:22:58.835 06:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:58.835 06:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.411 06:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.411 06:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:59.670 06:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:59.670 06:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.670 06:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 50b89512-0882-44e8-b05c-7f02adeb8102 00:22:59.931 [2024-08-13 06:18:01.627694] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:59.931 [2024-08-13 06:18:01.627855] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:22:59.931 [2024-08-13 06:18:01.627870] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:59.931 [2024-08-13 06:18:01.628127] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:22:59.931 [2024-08-13 06:18:01.628548] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:22:59.931 [2024-08-13 06:18:01.628569] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:22:59.931 [2024-08-13 06:18:01.628665] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.931 NewBaseBdev 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:59.931 06:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.190 06:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:00.450 [ 00:23:00.450 { 00:23:00.450 "name": "NewBaseBdev", 00:23:00.450 "aliases": [ 00:23:00.450 "50b89512-0882-44e8-b05c-7f02adeb8102" 00:23:00.450 ], 00:23:00.450 "product_name": "Malloc disk", 00:23:00.450 "block_size": 512, 00:23:00.450 "num_blocks": 
65536, 00:23:00.450 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:23:00.450 "assigned_rate_limits": { 00:23:00.450 "rw_ios_per_sec": 0, 00:23:00.450 "rw_mbytes_per_sec": 0, 00:23:00.450 "r_mbytes_per_sec": 0, 00:23:00.450 "w_mbytes_per_sec": 0 00:23:00.450 }, 00:23:00.450 "claimed": true, 00:23:00.450 "claim_type": "exclusive_write", 00:23:00.450 "zoned": false, 00:23:00.450 "supported_io_types": { 00:23:00.450 "read": true, 00:23:00.450 "write": true, 00:23:00.450 "unmap": true, 00:23:00.450 "flush": true, 00:23:00.450 "reset": true, 00:23:00.450 "nvme_admin": false, 00:23:00.450 "nvme_io": false, 00:23:00.450 "nvme_io_md": false, 00:23:00.450 "write_zeroes": true, 00:23:00.450 "zcopy": true, 00:23:00.450 "get_zone_info": false, 00:23:00.450 "zone_management": false, 00:23:00.450 "zone_append": false, 00:23:00.450 "compare": false, 00:23:00.450 "compare_and_write": false, 00:23:00.450 "abort": true, 00:23:00.450 "seek_hole": false, 00:23:00.450 "seek_data": false, 00:23:00.450 "copy": true, 00:23:00.450 "nvme_iov_md": false 00:23:00.450 }, 00:23:00.450 "memory_domains": [ 00:23:00.450 { 00:23:00.450 "dma_device_id": "system", 00:23:00.450 "dma_device_type": 1 00:23:00.450 }, 00:23:00.450 { 00:23:00.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.450 "dma_device_type": 2 00:23:00.450 } 00:23:00.450 ], 00:23:00.450 "driver_specific": {} 00:23:00.450 } 00:23:00.450 ] 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.450 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.450 "name": "Existed_Raid", 00:23:00.450 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:23:00.450 "strip_size_kb": 64, 00:23:00.450 "state": "online", 00:23:00.450 "raid_level": "raid5f", 00:23:00.450 "superblock": true, 00:23:00.450 "num_base_bdevs": 4, 00:23:00.450 "num_base_bdevs_discovered": 4, 00:23:00.450 "num_base_bdevs_operational": 4, 00:23:00.450 "base_bdevs_list": [ 00:23:00.450 { 
00:23:00.450 "name": "NewBaseBdev", 00:23:00.450 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:23:00.450 "is_configured": true, 00:23:00.450 "data_offset": 2048, 00:23:00.450 "data_size": 63488 00:23:00.450 }, 00:23:00.450 { 00:23:00.450 "name": "BaseBdev2", 00:23:00.450 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:23:00.450 "is_configured": true, 00:23:00.450 "data_offset": 2048, 00:23:00.450 "data_size": 63488 00:23:00.450 }, 00:23:00.450 { 00:23:00.450 "name": "BaseBdev3", 00:23:00.450 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:23:00.450 "is_configured": true, 00:23:00.450 "data_offset": 2048, 00:23:00.450 "data_size": 63488 00:23:00.451 }, 00:23:00.451 { 00:23:00.451 "name": "BaseBdev4", 00:23:00.451 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:23:00.451 "is_configured": true, 00:23:00.451 "data_offset": 2048, 00:23:00.451 "data_size": 63488 00:23:00.451 } 00:23:00.451 ] 00:23:00.451 }' 00:23:00.451 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.451 06:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:01.020 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:01.280 [2024-08-13 06:18:02.945693] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:01.280 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:01.280 "name": "Existed_Raid", 00:23:01.280 "aliases": [ 00:23:01.280 "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d" 00:23:01.280 ], 00:23:01.280 "product_name": "Raid Volume", 00:23:01.280 "block_size": 512, 00:23:01.280 "num_blocks": 190464, 00:23:01.280 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:23:01.280 "assigned_rate_limits": { 00:23:01.280 "rw_ios_per_sec": 0, 00:23:01.280 "rw_mbytes_per_sec": 0, 00:23:01.280 "r_mbytes_per_sec": 0, 00:23:01.280 "w_mbytes_per_sec": 0 00:23:01.280 }, 00:23:01.280 "claimed": false, 00:23:01.280 "zoned": false, 00:23:01.280 "supported_io_types": { 00:23:01.280 "read": true, 00:23:01.280 "write": true, 00:23:01.280 "unmap": false, 00:23:01.280 "flush": false, 00:23:01.280 "reset": true, 00:23:01.280 "nvme_admin": false, 00:23:01.280 "nvme_io": false, 00:23:01.280 "nvme_io_md": false, 00:23:01.280 "write_zeroes": true, 00:23:01.280 "zcopy": false, 00:23:01.280 "get_zone_info": false, 00:23:01.280 "zone_management": false, 00:23:01.280 "zone_append": false, 00:23:01.280 "compare": false, 00:23:01.280 "compare_and_write": false, 00:23:01.280 "abort": false, 00:23:01.280 "seek_hole": false, 00:23:01.280 "seek_data": false, 
00:23:01.280 "copy": false, 00:23:01.280 "nvme_iov_md": false 00:23:01.280 }, 00:23:01.280 "driver_specific": { 00:23:01.280 "raid": { 00:23:01.280 "uuid": "2cd9beb1-d4ba-45d8-96ce-acd0dadc0a2d", 00:23:01.280 "strip_size_kb": 64, 00:23:01.280 "state": "online", 00:23:01.280 "raid_level": "raid5f", 00:23:01.280 "superblock": true, 00:23:01.280 "num_base_bdevs": 4, 00:23:01.280 "num_base_bdevs_discovered": 4, 00:23:01.280 "num_base_bdevs_operational": 4, 00:23:01.280 "base_bdevs_list": [ 00:23:01.280 { 00:23:01.280 "name": "NewBaseBdev", 00:23:01.280 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:23:01.280 "is_configured": true, 00:23:01.280 "data_offset": 2048, 00:23:01.280 "data_size": 63488 00:23:01.280 }, 00:23:01.280 { 00:23:01.280 "name": "BaseBdev2", 00:23:01.280 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:23:01.280 "is_configured": true, 00:23:01.280 "data_offset": 2048, 00:23:01.280 "data_size": 63488 00:23:01.280 }, 00:23:01.280 { 00:23:01.280 "name": "BaseBdev3", 00:23:01.280 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:23:01.280 "is_configured": true, 00:23:01.280 "data_offset": 2048, 00:23:01.280 "data_size": 63488 00:23:01.280 }, 00:23:01.280 { 00:23:01.280 "name": "BaseBdev4", 00:23:01.280 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:23:01.280 "is_configured": true, 00:23:01.280 "data_offset": 2048, 00:23:01.280 "data_size": 63488 00:23:01.280 } 00:23:01.280 ] 00:23:01.280 } 00:23:01.280 } 00:23:01.280 }' 00:23:01.280 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:01.280 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:01.280 BaseBdev2 00:23:01.280 BaseBdev3 00:23:01.280 BaseBdev4' 00:23:01.280 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:01.280 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:01.280 06:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:01.540 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:01.540 "name": "NewBaseBdev", 00:23:01.540 "aliases": [ 00:23:01.540 "50b89512-0882-44e8-b05c-7f02adeb8102" 00:23:01.540 ], 00:23:01.540 "product_name": "Malloc disk", 00:23:01.540 "block_size": 512, 00:23:01.540 "num_blocks": 65536, 00:23:01.540 "uuid": "50b89512-0882-44e8-b05c-7f02adeb8102", 00:23:01.540 "assigned_rate_limits": { 00:23:01.540 "rw_ios_per_sec": 0, 00:23:01.540 "rw_mbytes_per_sec": 0, 00:23:01.540 "r_mbytes_per_sec": 0, 00:23:01.540 "w_mbytes_per_sec": 0 00:23:01.540 }, 00:23:01.540 "claimed": true, 00:23:01.540 "claim_type": "exclusive_write", 00:23:01.540 "zoned": false, 00:23:01.540 "supported_io_types": { 00:23:01.540 "read": true, 00:23:01.540 "write": true, 00:23:01.540 "unmap": true, 00:23:01.540 "flush": true, 00:23:01.540 "reset": true, 00:23:01.540 "nvme_admin": false, 00:23:01.540 "nvme_io": false, 00:23:01.540 "nvme_io_md": false, 00:23:01.540 "write_zeroes": true, 00:23:01.540 "zcopy": true, 00:23:01.540 "get_zone_info": false, 00:23:01.540 "zone_management": false, 00:23:01.540 "zone_append": false, 00:23:01.540 "compare": false, 00:23:01.540 "compare_and_write": false, 00:23:01.540 "abort": true, 00:23:01.540 "seek_hole": false, 
00:23:01.540 "seek_data": false, 00:23:01.540 "copy": true, 00:23:01.540 "nvme_iov_md": false 00:23:01.540 }, 00:23:01.540 "memory_domains": [ 00:23:01.540 { 00:23:01.540 "dma_device_id": "system", 00:23:01.540 "dma_device_type": 1 00:23:01.540 }, 00:23:01.540 { 00:23:01.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.540 "dma_device_type": 2 00:23:01.540 } 00:23:01.540 ], 00:23:01.540 "driver_specific": {} 00:23:01.540 }' 00:23:01.540 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.540 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.540 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:01.540 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.540 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:01.799 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:02.061 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.061 "name": "BaseBdev2", 00:23:02.061 "aliases": [ 00:23:02.061 "9aac321f-2c15-4915-871e-b351a6b8aef9" 00:23:02.061 ], 00:23:02.061 "product_name": "Malloc disk", 00:23:02.061 "block_size": 512, 00:23:02.061 "num_blocks": 65536, 00:23:02.061 "uuid": "9aac321f-2c15-4915-871e-b351a6b8aef9", 00:23:02.061 "assigned_rate_limits": { 00:23:02.061 "rw_ios_per_sec": 0, 00:23:02.061 "rw_mbytes_per_sec": 0, 00:23:02.061 "r_mbytes_per_sec": 0, 00:23:02.061 "w_mbytes_per_sec": 0 00:23:02.061 }, 00:23:02.061 "claimed": true, 00:23:02.061 "claim_type": "exclusive_write", 00:23:02.061 "zoned": false, 00:23:02.061 "supported_io_types": { 00:23:02.061 "read": true, 00:23:02.061 "write": true, 00:23:02.061 "unmap": true, 00:23:02.061 "flush": true, 00:23:02.061 "reset": true, 00:23:02.061 "nvme_admin": false, 00:23:02.061 "nvme_io": false, 00:23:02.061 "nvme_io_md": false, 00:23:02.061 "write_zeroes": true, 00:23:02.061 "zcopy": true, 00:23:02.061 "get_zone_info": false, 00:23:02.061 "zone_management": false, 00:23:02.061 "zone_append": false, 00:23:02.061 "compare": false, 00:23:02.061 "compare_and_write": false, 00:23:02.061 "abort": true, 00:23:02.061 "seek_hole": false, 00:23:02.061 "seek_data": false, 00:23:02.061 "copy": true, 00:23:02.061 "nvme_iov_md": false 00:23:02.061 }, 
00:23:02.061 "memory_domains": [ 00:23:02.061 { 00:23:02.061 "dma_device_id": "system", 00:23:02.061 "dma_device_type": 1 00:23:02.061 }, 00:23:02.061 { 00:23:02.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.061 "dma_device_type": 2 00:23:02.061 } 00:23:02.061 ], 00:23:02.061 "driver_specific": {} 00:23:02.061 }' 00:23:02.061 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.061 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.061 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.061 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.320 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.320 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:02.320 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.320 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.320 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:02.320 06:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.320 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.320 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:02.320 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.320 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:02.320 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.579 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.579 "name": "BaseBdev3", 00:23:02.579 "aliases": [ 00:23:02.579 "6adac89d-ec7f-428e-814a-bc5416a44031" 00:23:02.579 ], 00:23:02.579 "product_name": "Malloc disk", 00:23:02.579 "block_size": 512, 00:23:02.579 "num_blocks": 65536, 00:23:02.579 "uuid": "6adac89d-ec7f-428e-814a-bc5416a44031", 00:23:02.579 "assigned_rate_limits": { 00:23:02.579 "rw_ios_per_sec": 0, 00:23:02.579 "rw_mbytes_per_sec": 0, 00:23:02.579 "r_mbytes_per_sec": 0, 00:23:02.579 "w_mbytes_per_sec": 0 00:23:02.579 }, 00:23:02.579 "claimed": true, 00:23:02.579 "claim_type": "exclusive_write", 00:23:02.579 "zoned": false, 00:23:02.579 "supported_io_types": { 00:23:02.579 "read": true, 00:23:02.579 "write": true, 00:23:02.579 "unmap": true, 00:23:02.579 "flush": true, 00:23:02.579 "reset": true, 00:23:02.579 "nvme_admin": false, 00:23:02.579 "nvme_io": false, 00:23:02.579 "nvme_io_md": false, 00:23:02.579 "write_zeroes": true, 00:23:02.579 "zcopy": true, 00:23:02.579 "get_zone_info": false, 00:23:02.579 "zone_management": false, 00:23:02.579 "zone_append": false, 00:23:02.579 "compare": false, 00:23:02.579 "compare_and_write": false, 00:23:02.579 "abort": true, 00:23:02.579 "seek_hole": false, 00:23:02.579 "seek_data": false, 00:23:02.579 "copy": true, 00:23:02.579 "nvme_iov_md": false 00:23:02.579 }, 00:23:02.579 "memory_domains": [ 00:23:02.579 { 00:23:02.579 "dma_device_id": "system", 00:23:02.579 "dma_device_type": 
1 00:23:02.579 }, 00:23:02.579 { 00:23:02.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.579 "dma_device_type": 2 00:23:02.579 } 00:23:02.579 ], 00:23:02.579 "driver_specific": {} 00:23:02.579 }' 00:23:02.579 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.579 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.579 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.579 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.839 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.839 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:02.839 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.839 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.839 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:02.839 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.839 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.098 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.098 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.098 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:03.098 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.099 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.099 "name": "BaseBdev4", 00:23:03.099 "aliases": [ 00:23:03.099 "35452fda-fcf0-408f-868c-951f929f5067" 00:23:03.099 ], 00:23:03.099 "product_name": "Malloc disk", 00:23:03.099 "block_size": 512, 00:23:03.099 "num_blocks": 65536, 00:23:03.099 "uuid": "35452fda-fcf0-408f-868c-951f929f5067", 00:23:03.099 "assigned_rate_limits": { 00:23:03.099 "rw_ios_per_sec": 0, 00:23:03.099 "rw_mbytes_per_sec": 0, 00:23:03.099 "r_mbytes_per_sec": 0, 00:23:03.099 "w_mbytes_per_sec": 0 00:23:03.099 }, 00:23:03.099 "claimed": true, 00:23:03.099 "claim_type": "exclusive_write", 00:23:03.099 "zoned": false, 00:23:03.099 "supported_io_types": { 00:23:03.099 "read": true, 00:23:03.099 "write": true, 00:23:03.099 "unmap": true, 00:23:03.099 "flush": true, 00:23:03.099 "reset": true, 00:23:03.099 "nvme_admin": false, 00:23:03.099 "nvme_io": false, 00:23:03.099 "nvme_io_md": false, 00:23:03.099 "write_zeroes": true, 00:23:03.099 "zcopy": true, 00:23:03.099 "get_zone_info": false, 00:23:03.099 "zone_management": false, 00:23:03.099 "zone_append": false, 00:23:03.099 "compare": false, 00:23:03.099 "compare_and_write": false, 00:23:03.099 "abort": true, 00:23:03.099 "seek_hole": false, 00:23:03.099 "seek_data": false, 00:23:03.099 "copy": true, 00:23:03.099 "nvme_iov_md": false 00:23:03.099 }, 00:23:03.099 "memory_domains": [ 00:23:03.099 { 00:23:03.099 "dma_device_id": "system", 00:23:03.099 "dma_device_type": 1 00:23:03.099 }, 00:23:03.099 { 00:23:03.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.099 "dma_device_type": 
2 00:23:03.099 } 00:23:03.099 ], 00:23:03.099 "driver_specific": {} 00:23:03.099 }' 00:23:03.099 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.358 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.358 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.358 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.358 06:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.358 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.358 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.358 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.358 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.358 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.358 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.618 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.618 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:03.618 [2024-08-13 06:18:05.381389] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:03.618 [2024-08-13 06:18:05.381416] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:03.618 [2024-08-13 06:18:05.381487] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:03.618 [2024-08-13 06:18:05.381720] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:03.618 [2024-08-13 06:18:05.381732] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:23:03.618 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 102002 00:23:03.618 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 102002 ']' 00:23:03.618 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 102002 00:23:03.618 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:23:03.618 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.878 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102002 00:23:03.878 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:03.878 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:03.878 killing process with pid 102002 00:23:03.878 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102002' 00:23:03.878 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 102002 00:23:03.878 [2024-08-13 06:18:05.430772] 
bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:03.878 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 102002 00:23:03.878 [2024-08-13 06:18:05.471610] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:04.138 06:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:04.138 00:23:04.138 real 0m27.611s 00:23:04.138 user 0m51.071s 00:23:04.138 sys 0m4.623s 00:23:04.138 ************************************ 00:23:04.138 END TEST raid5f_state_function_test_sb 00:23:04.138 ************************************ 00:23:04.138 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:04.138 06:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.138 06:18:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:04.138 06:18:05 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:04.138 06:18:05 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:04.138 06:18:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:04.138 ************************************ 00:23:04.138 START TEST raid5f_superblock_test 00:23:04.138 ************************************ 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid5f 4 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=103000 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@428 -- # waitforlisten 103000 /var/tmp/spdk-raid.sock 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 103000 ']' 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:04.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:04.138 06:18:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.138 [2024-08-13 06:18:05.895572] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:23:04.138 [2024-08-13 06:18:05.895698] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103000 ] 00:23:04.398 [2024-08-13 06:18:06.039900] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.398 [2024-08-13 06:18:06.085534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.398 [2024-08-13 06:18:06.128064] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.398 [2024-08-13 06:18:06.128100] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:04.968 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:05.228 malloc1 00:23:05.228 06:18:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:05.487 [2024-08-13 06:18:07.056333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:05.488 [2024-08-13 06:18:07.056449] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.488 [2024-08-13 06:18:07.056492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:05.488 [2024-08-13 06:18:07.056521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.488 [2024-08-13 06:18:07.058528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.488 [2024-08-13 06:18:07.058603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:05.488 pt1 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:05.488 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:05.488 malloc2 00:23:05.747 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:05.748 [2024-08-13 06:18:07.464399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:05.748 [2024-08-13 06:18:07.464460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.748 [2024-08-13 06:18:07.464478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:05.748 [2024-08-13 06:18:07.464486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.748 [2024-08-13 06:18:07.466401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.748 [2024-08-13 06:18:07.466499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:05.748 pt2 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:05.748 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:06.007 malloc3 00:23:06.007 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:06.267 [2024-08-13 06:18:07.876258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:06.267 [2024-08-13 06:18:07.876365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.267 [2024-08-13 06:18:07.876401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:06.267 [2024-08-13 06:18:07.876428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.267 [2024-08-13 06:18:07.878314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.267 [2024-08-13 06:18:07.878383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:06.267 pt3 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:06.267 06:18:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:06.527 malloc4 00:23:06.527 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:06.527 [2024-08-13 06:18:08.292106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:06.527 [2024-08-13 06:18:08.292158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.527 [2024-08-13 06:18:08.292175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:06.527 [2024-08-13 06:18:08.292183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.527 [2024-08-13 06:18:08.294061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.527 [2024-08-13 06:18:08.294095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:06.527 pt4 00:23:06.527 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:06.527 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:06.527 
06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:06.787 [2024-08-13 06:18:08.487816] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:06.787 [2024-08-13 06:18:08.489563] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:06.787 [2024-08-13 06:18:08.489633] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:06.787 [2024-08-13 06:18:08.489670] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:06.787 [2024-08-13 06:18:08.489842] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:23:06.787 [2024-08-13 06:18:08.489859] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:06.787 [2024-08-13 06:18:08.490129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:23:06.787 [2024-08-13 06:18:08.490610] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:23:06.787 [2024-08-13 06:18:08.490633] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:23:06.787 [2024-08-13 06:18:08.490768] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.787 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:06.787 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:06.787 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:06.787 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:06.787 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:06.788 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:06.788 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.788 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.788 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.788 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.788 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.788 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.047 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.047 "name": "raid_bdev1", 00:23:07.047 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:07.047 "strip_size_kb": 64, 00:23:07.047 "state": "online", 00:23:07.047 "raid_level": "raid5f", 00:23:07.047 "superblock": true, 00:23:07.047 "num_base_bdevs": 4, 00:23:07.047 "num_base_bdevs_discovered": 4, 00:23:07.047 "num_base_bdevs_operational": 4, 00:23:07.047 "base_bdevs_list": [ 00:23:07.047 { 00:23:07.047 "name": "pt1", 00:23:07.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:07.047 "is_configured": true, 00:23:07.047 
"data_offset": 2048, 00:23:07.047 "data_size": 63488 00:23:07.047 }, 00:23:07.047 { 00:23:07.047 "name": "pt2", 00:23:07.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:07.047 "is_configured": true, 00:23:07.047 "data_offset": 2048, 00:23:07.047 "data_size": 63488 00:23:07.047 }, 00:23:07.047 { 00:23:07.047 "name": "pt3", 00:23:07.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:07.047 "is_configured": true, 00:23:07.047 "data_offset": 2048, 00:23:07.047 "data_size": 63488 00:23:07.047 }, 00:23:07.047 { 00:23:07.047 "name": "pt4", 00:23:07.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:07.047 "is_configured": true, 00:23:07.047 "data_offset": 2048, 00:23:07.047 "data_size": 63488 00:23:07.047 } 00:23:07.047 ] 00:23:07.047 }' 00:23:07.047 06:18:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.047 06:18:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:07.616 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:07.616 [2024-08-13 06:18:09.407094] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.876 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:07.876 "name": "raid_bdev1", 00:23:07.876 "aliases": [ 00:23:07.876 "6523f3ba-148d-470c-a868-cc1c083bdb3b" 00:23:07.876 ], 00:23:07.876 "product_name": "Raid Volume", 00:23:07.876 "block_size": 512, 00:23:07.876 "num_blocks": 190464, 00:23:07.876 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:07.876 "assigned_rate_limits": { 00:23:07.876 "rw_ios_per_sec": 0, 00:23:07.876 "rw_mbytes_per_sec": 0, 00:23:07.876 "r_mbytes_per_sec": 0, 00:23:07.876 "w_mbytes_per_sec": 0 00:23:07.876 }, 00:23:07.876 "claimed": false, 00:23:07.876 "zoned": false, 00:23:07.876 "supported_io_types": { 00:23:07.876 "read": true, 00:23:07.876 "write": true, 00:23:07.876 "unmap": false, 00:23:07.876 "flush": false, 00:23:07.876 "reset": true, 00:23:07.876 "nvme_admin": false, 00:23:07.876 "nvme_io": false, 00:23:07.876 "nvme_io_md": false, 00:23:07.876 "write_zeroes": true, 00:23:07.876 "zcopy": false, 00:23:07.876 "get_zone_info": false, 00:23:07.876 "zone_management": false, 00:23:07.876 "zone_append": false, 00:23:07.876 "compare": false, 00:23:07.876 "compare_and_write": false, 00:23:07.876 "abort": false, 00:23:07.876 "seek_hole": false, 00:23:07.876 "seek_data": false, 00:23:07.876 "copy": false, 00:23:07.876 "nvme_iov_md": false 00:23:07.876 }, 00:23:07.876 "driver_specific": { 00:23:07.876 "raid": { 00:23:07.876 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:07.876 "strip_size_kb": 64, 00:23:07.876 "state": 
"online", 00:23:07.876 "raid_level": "raid5f", 00:23:07.876 "superblock": true, 00:23:07.876 "num_base_bdevs": 4, 00:23:07.876 "num_base_bdevs_discovered": 4, 00:23:07.876 "num_base_bdevs_operational": 4, 00:23:07.876 "base_bdevs_list": [ 00:23:07.876 { 00:23:07.876 "name": "pt1", 00:23:07.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:07.876 "is_configured": true, 00:23:07.876 "data_offset": 2048, 00:23:07.876 "data_size": 63488 00:23:07.876 }, 00:23:07.876 { 00:23:07.876 "name": "pt2", 00:23:07.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:07.876 "is_configured": true, 00:23:07.876 "data_offset": 2048, 00:23:07.876 "data_size": 63488 00:23:07.876 }, 00:23:07.876 { 00:23:07.876 "name": "pt3", 00:23:07.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:07.876 "is_configured": true, 00:23:07.876 "data_offset": 2048, 00:23:07.876 "data_size": 63488 00:23:07.876 }, 00:23:07.876 { 00:23:07.876 "name": "pt4", 00:23:07.876 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:07.876 "is_configured": true, 00:23:07.876 "data_offset": 2048, 00:23:07.876 "data_size": 63488 00:23:07.876 } 00:23:07.876 ] 00:23:07.876 } 00:23:07.876 } 00:23:07.876 }' 00:23:07.876 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:07.876 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:07.876 pt2 00:23:07.876 pt3 00:23:07.876 pt4' 00:23:07.876 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:07.876 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:07.876 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:08.136 "name": "pt1", 00:23:08.136 "aliases": [ 00:23:08.136 "00000000-0000-0000-0000-000000000001" 00:23:08.136 ], 00:23:08.136 "product_name": "passthru", 00:23:08.136 "block_size": 512, 00:23:08.136 "num_blocks": 65536, 00:23:08.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:08.136 "assigned_rate_limits": { 00:23:08.136 "rw_ios_per_sec": 0, 00:23:08.136 "rw_mbytes_per_sec": 0, 00:23:08.136 "r_mbytes_per_sec": 0, 00:23:08.136 "w_mbytes_per_sec": 0 00:23:08.136 }, 00:23:08.136 "claimed": true, 00:23:08.136 "claim_type": "exclusive_write", 00:23:08.136 "zoned": false, 00:23:08.136 "supported_io_types": { 00:23:08.136 "read": true, 00:23:08.136 "write": true, 00:23:08.136 "unmap": true, 00:23:08.136 "flush": true, 00:23:08.136 "reset": true, 00:23:08.136 "nvme_admin": false, 00:23:08.136 "nvme_io": false, 00:23:08.136 "nvme_io_md": false, 00:23:08.136 "write_zeroes": true, 00:23:08.136 "zcopy": true, 00:23:08.136 "get_zone_info": false, 00:23:08.136 "zone_management": false, 00:23:08.136 "zone_append": false, 00:23:08.136 "compare": false, 00:23:08.136 "compare_and_write": false, 00:23:08.136 "abort": true, 00:23:08.136 "seek_hole": false, 00:23:08.136 "seek_data": false, 00:23:08.136 "copy": true, 00:23:08.136 "nvme_iov_md": false 00:23:08.136 }, 00:23:08.136 "memory_domains": [ 00:23:08.136 { 00:23:08.136 "dma_device_id": "system", 00:23:08.136 "dma_device_type": 1 00:23:08.136 }, 00:23:08.136 { 00:23:08.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.136 "dma_device_type": 2 00:23:08.136 } 
00:23:08.136 ], 00:23:08.136 "driver_specific": { 00:23:08.136 "passthru": { 00:23:08.136 "name": "pt1", 00:23:08.136 "base_bdev_name": "malloc1" 00:23:08.136 } 00:23:08.136 } 00:23:08.136 }' 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:08.136 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.395 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.395 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:08.395 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:08.395 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:08.395 06:18:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:08.654 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:08.654 "name": "pt2", 00:23:08.654 "aliases": [ 00:23:08.654 "00000000-0000-0000-0000-000000000002" 00:23:08.654 ], 00:23:08.654 "product_name": "passthru", 00:23:08.654 "block_size": 512, 00:23:08.654 "num_blocks": 65536, 00:23:08.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:08.654 "assigned_rate_limits": { 00:23:08.654 "rw_ios_per_sec": 0, 00:23:08.654 "rw_mbytes_per_sec": 0, 00:23:08.654 "r_mbytes_per_sec": 0, 00:23:08.654 "w_mbytes_per_sec": 0 00:23:08.654 }, 00:23:08.654 "claimed": true, 00:23:08.654 "claim_type": "exclusive_write", 00:23:08.654 "zoned": false, 00:23:08.654 "supported_io_types": { 00:23:08.654 "read": true, 00:23:08.655 "write": true, 00:23:08.655 "unmap": true, 00:23:08.655 "flush": true, 00:23:08.655 "reset": true, 00:23:08.655 "nvme_admin": false, 00:23:08.655 "nvme_io": false, 00:23:08.655 "nvme_io_md": false, 00:23:08.655 "write_zeroes": true, 00:23:08.655 "zcopy": true, 00:23:08.655 "get_zone_info": false, 00:23:08.655 "zone_management": false, 00:23:08.655 "zone_append": false, 00:23:08.655 "compare": false, 00:23:08.655 "compare_and_write": false, 00:23:08.655 "abort": true, 00:23:08.655 "seek_hole": false, 00:23:08.655 "seek_data": false, 00:23:08.655 "copy": true, 00:23:08.655 "nvme_iov_md": false 00:23:08.655 }, 00:23:08.655 "memory_domains": [ 00:23:08.655 { 00:23:08.655 "dma_device_id": "system", 00:23:08.655 "dma_device_type": 1 00:23:08.655 }, 00:23:08.655 { 00:23:08.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.655 "dma_device_type": 2 00:23:08.655 } 00:23:08.655 ], 00:23:08.655 "driver_specific": { 00:23:08.655 "passthru": { 00:23:08.655 "name": "pt2", 00:23:08.655 
"base_bdev_name": "malloc2" 00:23:08.655 } 00:23:08.655 } 00:23:08.655 }' 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:08.655 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.915 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.915 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:08.915 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:08.915 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:08.915 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:08.915 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:08.915 "name": "pt3", 00:23:08.915 "aliases": [ 00:23:08.915 "00000000-0000-0000-0000-000000000003" 00:23:08.915 ], 00:23:08.915 "product_name": "passthru", 00:23:08.915 "block_size": 512, 00:23:08.915 "num_blocks": 65536, 00:23:08.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:08.915 "assigned_rate_limits": { 00:23:08.915 "rw_ios_per_sec": 0, 00:23:08.915 "rw_mbytes_per_sec": 0, 00:23:08.915 "r_mbytes_per_sec": 0, 00:23:08.915 "w_mbytes_per_sec": 0 00:23:08.915 }, 00:23:08.915 "claimed": true, 00:23:08.915 "claim_type": "exclusive_write", 00:23:08.915 "zoned": false, 00:23:08.915 "supported_io_types": { 00:23:08.915 "read": true, 00:23:08.915 "write": true, 00:23:08.915 "unmap": true, 00:23:08.915 "flush": true, 00:23:08.915 "reset": true, 00:23:08.915 "nvme_admin": false, 00:23:08.915 "nvme_io": false, 00:23:08.915 "nvme_io_md": false, 00:23:08.915 "write_zeroes": true, 00:23:08.915 "zcopy": true, 00:23:08.915 "get_zone_info": false, 00:23:08.915 "zone_management": false, 00:23:08.915 "zone_append": false, 00:23:08.915 "compare": false, 00:23:08.915 "compare_and_write": false, 00:23:08.915 "abort": true, 00:23:08.915 "seek_hole": false, 00:23:08.915 "seek_data": false, 00:23:08.915 "copy": true, 00:23:08.915 "nvme_iov_md": false 00:23:08.915 }, 00:23:08.915 "memory_domains": [ 00:23:08.915 { 00:23:08.915 "dma_device_id": "system", 00:23:08.915 "dma_device_type": 1 00:23:08.915 }, 00:23:08.915 { 00:23:08.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.915 "dma_device_type": 2 00:23:08.915 } 00:23:08.915 ], 00:23:08.915 "driver_specific": { 00:23:08.915 "passthru": { 00:23:08.915 "name": "pt3", 00:23:08.915 "base_bdev_name": "malloc3" 00:23:08.915 } 00:23:08.915 } 00:23:08.915 }' 00:23:08.915 06:18:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.175 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.175 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:09.175 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.175 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.175 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:09.175 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.175 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.434 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:09.434 06:18:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.434 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.434 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.434 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:09.434 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:09.434 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:09.694 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:09.694 "name": "pt4", 00:23:09.694 "aliases": [ 00:23:09.694 "00000000-0000-0000-0000-000000000004" 00:23:09.694 ], 00:23:09.694 "product_name": "passthru", 00:23:09.694 "block_size": 512, 00:23:09.694 "num_blocks": 65536, 00:23:09.694 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:09.694 "assigned_rate_limits": { 00:23:09.694 "rw_ios_per_sec": 0, 00:23:09.694 "rw_mbytes_per_sec": 0, 00:23:09.694 "r_mbytes_per_sec": 0, 00:23:09.694 "w_mbytes_per_sec": 0 00:23:09.694 }, 00:23:09.694 "claimed": true, 00:23:09.694 "claim_type": "exclusive_write", 00:23:09.694 "zoned": false, 00:23:09.694 "supported_io_types": { 00:23:09.694 "read": true, 00:23:09.694 "write": true, 00:23:09.694 "unmap": true, 00:23:09.694 "flush": true, 00:23:09.694 "reset": true, 00:23:09.694 "nvme_admin": false, 00:23:09.694 "nvme_io": false, 00:23:09.694 "nvme_io_md": false, 00:23:09.694 "write_zeroes": true, 00:23:09.694 "zcopy": true, 00:23:09.694 "get_zone_info": false, 00:23:09.694 "zone_management": false, 00:23:09.694 "zone_append": false, 00:23:09.694 "compare": false, 00:23:09.694 "compare_and_write": false, 00:23:09.694 "abort": true, 00:23:09.694 "seek_hole": false, 00:23:09.694 "seek_data": false, 00:23:09.694 "copy": true, 00:23:09.694 "nvme_iov_md": false 00:23:09.694 }, 00:23:09.694 "memory_domains": [ 00:23:09.694 { 00:23:09.694 "dma_device_id": "system", 00:23:09.694 "dma_device_type": 1 00:23:09.694 }, 00:23:09.694 { 00:23:09.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.694 "dma_device_type": 2 00:23:09.694 } 00:23:09.694 ], 00:23:09.694 "driver_specific": { 00:23:09.694 "passthru": { 00:23:09.694 "name": "pt4", 00:23:09.694 "base_bdev_name": "malloc4" 00:23:09.694 } 00:23:09.694 } 00:23:09.694 }' 00:23:09.694 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.694 06:18:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.695 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:09.695 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.695 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.695 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:09.695 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.954 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.954 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:09.954 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.954 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.954 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.954 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:09.954 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:23:10.213 [2024-08-13 06:18:11.759379] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.213 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=6523f3ba-148d-470c-a868-cc1c083bdb3b 00:23:10.213 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 6523f3ba-148d-470c-a868-cc1c083bdb3b ']' 00:23:10.213 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:10.213 [2024-08-13 06:18:11.950889] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:10.213 [2024-08-13 06:18:11.950915] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.213 [2024-08-13 06:18:11.951000] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.213 [2024-08-13 06:18:11.951093] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:10.213 [2024-08-13 06:18:11.951114] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:23:10.213 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:23:10.213 06:18:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.472 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:23:10.473 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:23:10.473 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:10.473 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:10.732 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:10.732 06:18:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:10.732 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:10.732 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:10.991 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:10.991 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:11.251 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:11.251 06:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@646 -- # local es=0 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:11.510 [2024-08-13 06:18:13.280678] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:11.510 [2024-08-13 06:18:13.282394] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:11.510 [2024-08-13 06:18:13.282441] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:11.510 [2024-08-13 06:18:13.282471] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:11.510 [2024-08-13 06:18:13.282514] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:11.510 [2024-08-13 06:18:13.282563] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:11.510 [2024-08-13 06:18:13.282580] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:11.510 [2024-08-13 06:18:13.282597] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:11.510 [2024-08-13 06:18:13.282610] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.510 [2024-08-13 06:18:13.282621] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:23:11.510 request: 00:23:11.510 { 00:23:11.510 "name": "raid_bdev1", 00:23:11.510 "raid_level": "raid5f", 00:23:11.510 "base_bdevs": [ 00:23:11.510 "malloc1", 00:23:11.510 "malloc2", 00:23:11.510 "malloc3", 00:23:11.510 "malloc4" 00:23:11.510 ], 00:23:11.510 "strip_size_kb": 64, 00:23:11.510 "superblock": false, 00:23:11.510 "method": "bdev_raid_create", 00:23:11.510 "req_id": 1 00:23:11.510 } 00:23:11.510 Got JSON-RPC error response 00:23:11.510 response: 00:23:11.510 { 00:23:11.510 "code": -17, 00:23:11.510 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:11.510 } 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # es=1 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:23:11.510 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:23:11.769 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.769 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:23:11.769 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:23:11.769 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:23:11.769 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:12.030 [2024-08-13 06:18:13.687925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:12.030 [2024-08-13 06:18:13.687989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.030 [2024-08-13 06:18:13.688005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:12.030 [2024-08-13 06:18:13.688018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.030 [2024-08-13 06:18:13.689961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.030 [2024-08-13 06:18:13.690000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:12.030 [2024-08-13 06:18:13.690088] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:12.030 [2024-08-13 06:18:13.690135] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:12.030 pt1 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.030 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.290 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.290 "name": "raid_bdev1", 00:23:12.290 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:12.290 "strip_size_kb": 64, 00:23:12.290 "state": "configuring", 00:23:12.291 "raid_level": "raid5f", 00:23:12.291 "superblock": true, 00:23:12.291 "num_base_bdevs": 4, 00:23:12.291 "num_base_bdevs_discovered": 1, 00:23:12.291 "num_base_bdevs_operational": 4, 00:23:12.291 "base_bdevs_list": [ 00:23:12.291 { 00:23:12.291 "name": "pt1", 00:23:12.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:12.291 "is_configured": true, 00:23:12.291 "data_offset": 2048, 00:23:12.291 "data_size": 63488 00:23:12.291 }, 00:23:12.291 { 00:23:12.291 "name": null, 00:23:12.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:12.291 "is_configured": false, 00:23:12.291 "data_offset": 2048, 00:23:12.291 "data_size": 63488 00:23:12.291 }, 00:23:12.291 { 00:23:12.291 "name": null, 00:23:12.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:12.291 "is_configured": false, 00:23:12.291 "data_offset": 2048, 00:23:12.291 "data_size": 63488 00:23:12.291 }, 00:23:12.291 { 00:23:12.291 "name": null, 00:23:12.291 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:12.291 "is_configured": false, 00:23:12.291 "data_offset": 2048, 00:23:12.291 "data_size": 63488 00:23:12.291 } 00:23:12.291 ] 00:23:12.291 }' 00:23:12.291 06:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.291 06:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.860 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:23:12.860 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:12.860 [2024-08-13 06:18:14.606341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:12.860 [2024-08-13 06:18:14.606394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.860 [2024-08-13 06:18:14.606411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:12.860 [2024-08-13 06:18:14.606421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.860 [2024-08-13 06:18:14.606758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.860 [2024-08-13 06:18:14.606786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:12.860 [2024-08-13 06:18:14.606850] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:12.860 [2024-08-13 06:18:14.606873] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:12.860 pt2 00:23:12.860 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:13.120 [2024-08-13 06:18:14.782180] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.120 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.379 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.379 "name": "raid_bdev1", 00:23:13.379 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:13.379 "strip_size_kb": 64, 00:23:13.379 "state": "configuring", 00:23:13.379 "raid_level": "raid5f", 00:23:13.379 "superblock": true, 00:23:13.379 "num_base_bdevs": 4, 00:23:13.379 "num_base_bdevs_discovered": 1, 00:23:13.379 "num_base_bdevs_operational": 4, 00:23:13.379 "base_bdevs_list": [ 00:23:13.379 { 00:23:13.379 "name": "pt1", 00:23:13.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:13.379 "is_configured": true, 00:23:13.379 "data_offset": 2048, 00:23:13.379 "data_size": 63488 00:23:13.379 }, 00:23:13.379 { 00:23:13.379 "name": null, 
00:23:13.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:13.379 "is_configured": false, 00:23:13.379 "data_offset": 2048, 00:23:13.379 "data_size": 63488 00:23:13.379 }, 00:23:13.379 { 00:23:13.379 "name": null, 00:23:13.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:13.379 "is_configured": false, 00:23:13.379 "data_offset": 2048, 00:23:13.379 "data_size": 63488 00:23:13.379 }, 00:23:13.379 { 00:23:13.379 "name": null, 00:23:13.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:13.379 "is_configured": false, 00:23:13.379 "data_offset": 2048, 00:23:13.379 "data_size": 63488 00:23:13.379 } 00:23:13.379 ] 00:23:13.379 }' 00:23:13.379 06:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.379 06:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.948 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:23:13.948 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:13.948 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:13.948 [2024-08-13 06:18:15.684720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:13.948 [2024-08-13 06:18:15.684781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.948 [2024-08-13 06:18:15.684803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:13.948 [2024-08-13 06:18:15.684811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.948 [2024-08-13 06:18:15.685182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.948 [2024-08-13 06:18:15.685207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:13.948 [2024-08-13 06:18:15.685281] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:13.948 [2024-08-13 06:18:15.685301] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:13.948 pt2 00:23:13.948 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:13.948 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:13.948 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:14.209 [2024-08-13 06:18:15.888339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:14.209 [2024-08-13 06:18:15.888396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.209 [2024-08-13 06:18:15.888418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:14.209 [2024-08-13 06:18:15.888435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.209 [2024-08-13 06:18:15.888831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.209 [2024-08-13 06:18:15.888855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:14.209 [2024-08-13 06:18:15.888927] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt3 00:23:14.209 [2024-08-13 06:18:15.888947] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:14.209 pt3 00:23:14.209 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:14.209 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:14.209 06:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:14.469 [2024-08-13 06:18:16.084060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:14.469 [2024-08-13 06:18:16.084115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.469 [2024-08-13 06:18:16.084137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:14.469 [2024-08-13 06:18:16.084145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.469 [2024-08-13 06:18:16.084503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.469 [2024-08-13 06:18:16.084527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:14.469 [2024-08-13 06:18:16.084599] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:14.469 [2024-08-13 06:18:16.084618] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:14.469 [2024-08-13 06:18:16.084751] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:23:14.469 [2024-08-13 06:18:16.084769] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:14.469 [2024-08-13 06:18:16.084987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:14.469 [2024-08-13 06:18:16.085413] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:23:14.469 [2024-08-13 06:18:16.085435] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:23:14.469 [2024-08-13 06:18:16.085530] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.469 pt4 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.469 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.729 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.729 "name": "raid_bdev1", 00:23:14.729 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:14.729 "strip_size_kb": 64, 00:23:14.729 "state": "online", 00:23:14.729 "raid_level": "raid5f", 00:23:14.729 "superblock": true, 00:23:14.729 "num_base_bdevs": 4, 00:23:14.729 "num_base_bdevs_discovered": 4, 00:23:14.729 "num_base_bdevs_operational": 4, 00:23:14.729 "base_bdevs_list": [ 00:23:14.729 { 00:23:14.729 "name": "pt1", 00:23:14.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:14.729 "is_configured": true, 00:23:14.729 "data_offset": 2048, 00:23:14.729 "data_size": 63488 00:23:14.729 }, 00:23:14.729 { 00:23:14.729 "name": "pt2", 00:23:14.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.729 "is_configured": true, 00:23:14.729 "data_offset": 2048, 00:23:14.729 "data_size": 63488 00:23:14.729 }, 00:23:14.729 { 00:23:14.729 "name": "pt3", 00:23:14.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:14.729 "is_configured": true, 00:23:14.729 "data_offset": 2048, 00:23:14.729 "data_size": 63488 00:23:14.729 }, 00:23:14.729 { 00:23:14.729 "name": "pt4", 00:23:14.729 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:14.729 "is_configured": true, 00:23:14.729 "data_offset": 2048, 00:23:14.729 "data_size": 63488 00:23:14.729 } 00:23:14.729 ] 00:23:14.729 }' 00:23:14.729 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.729 06:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:15.298 06:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:15.298 [2024-08-13 06:18:17.046575] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.298 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:15.298 "name": "raid_bdev1", 00:23:15.298 "aliases": [ 00:23:15.298 "6523f3ba-148d-470c-a868-cc1c083bdb3b" 00:23:15.298 ], 00:23:15.298 "product_name": "Raid Volume", 00:23:15.298 "block_size": 512, 00:23:15.298 "num_blocks": 190464, 00:23:15.298 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:15.298 
"assigned_rate_limits": { 00:23:15.298 "rw_ios_per_sec": 0, 00:23:15.298 "rw_mbytes_per_sec": 0, 00:23:15.298 "r_mbytes_per_sec": 0, 00:23:15.298 "w_mbytes_per_sec": 0 00:23:15.298 }, 00:23:15.298 "claimed": false, 00:23:15.298 "zoned": false, 00:23:15.298 "supported_io_types": { 00:23:15.298 "read": true, 00:23:15.298 "write": true, 00:23:15.298 "unmap": false, 00:23:15.298 "flush": false, 00:23:15.298 "reset": true, 00:23:15.298 "nvme_admin": false, 00:23:15.298 "nvme_io": false, 00:23:15.298 "nvme_io_md": false, 00:23:15.298 "write_zeroes": true, 00:23:15.298 "zcopy": false, 00:23:15.298 "get_zone_info": false, 00:23:15.298 "zone_management": false, 00:23:15.298 "zone_append": false, 00:23:15.298 "compare": false, 00:23:15.298 "compare_and_write": false, 00:23:15.298 "abort": false, 00:23:15.298 "seek_hole": false, 00:23:15.298 "seek_data": false, 00:23:15.298 "copy": false, 00:23:15.298 "nvme_iov_md": false 00:23:15.298 }, 00:23:15.298 "driver_specific": { 00:23:15.298 "raid": { 00:23:15.298 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:15.298 "strip_size_kb": 64, 00:23:15.298 "state": "online", 00:23:15.298 "raid_level": "raid5f", 00:23:15.298 "superblock": true, 00:23:15.298 "num_base_bdevs": 4, 00:23:15.298 "num_base_bdevs_discovered": 4, 00:23:15.298 "num_base_bdevs_operational": 4, 00:23:15.298 "base_bdevs_list": [ 00:23:15.298 { 00:23:15.298 "name": "pt1", 00:23:15.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:15.298 "is_configured": true, 00:23:15.298 "data_offset": 2048, 00:23:15.298 "data_size": 63488 00:23:15.298 }, 00:23:15.298 { 00:23:15.298 "name": "pt2", 00:23:15.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.298 "is_configured": true, 00:23:15.298 "data_offset": 2048, 00:23:15.298 "data_size": 63488 00:23:15.298 }, 00:23:15.298 { 00:23:15.298 "name": "pt3", 00:23:15.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:15.298 "is_configured": true, 00:23:15.298 "data_offset": 2048, 00:23:15.298 "data_size": 63488 00:23:15.298 }, 00:23:15.298 { 00:23:15.298 "name": "pt4", 00:23:15.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:15.298 "is_configured": true, 00:23:15.298 "data_offset": 2048, 00:23:15.298 "data_size": 63488 00:23:15.298 } 00:23:15.298 ] 00:23:15.298 } 00:23:15.298 } 00:23:15.298 }' 00:23:15.298 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:15.562 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:15.562 pt2 00:23:15.562 pt3 00:23:15.562 pt4' 00:23:15.562 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.562 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:15.562 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.562 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.562 "name": "pt1", 00:23:15.562 "aliases": [ 00:23:15.562 "00000000-0000-0000-0000-000000000001" 00:23:15.562 ], 00:23:15.562 "product_name": "passthru", 00:23:15.562 "block_size": 512, 00:23:15.562 "num_blocks": 65536, 00:23:15.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:15.562 "assigned_rate_limits": { 00:23:15.562 "rw_ios_per_sec": 0, 00:23:15.562 "rw_mbytes_per_sec": 0, 00:23:15.562 "r_mbytes_per_sec": 
0, 00:23:15.563 "w_mbytes_per_sec": 0 00:23:15.563 }, 00:23:15.563 "claimed": true, 00:23:15.563 "claim_type": "exclusive_write", 00:23:15.563 "zoned": false, 00:23:15.563 "supported_io_types": { 00:23:15.563 "read": true, 00:23:15.563 "write": true, 00:23:15.563 "unmap": true, 00:23:15.563 "flush": true, 00:23:15.563 "reset": true, 00:23:15.563 "nvme_admin": false, 00:23:15.563 "nvme_io": false, 00:23:15.563 "nvme_io_md": false, 00:23:15.563 "write_zeroes": true, 00:23:15.563 "zcopy": true, 00:23:15.563 "get_zone_info": false, 00:23:15.563 "zone_management": false, 00:23:15.563 "zone_append": false, 00:23:15.563 "compare": false, 00:23:15.563 "compare_and_write": false, 00:23:15.563 "abort": true, 00:23:15.563 "seek_hole": false, 00:23:15.563 "seek_data": false, 00:23:15.563 "copy": true, 00:23:15.563 "nvme_iov_md": false 00:23:15.563 }, 00:23:15.563 "memory_domains": [ 00:23:15.563 { 00:23:15.563 "dma_device_id": "system", 00:23:15.563 "dma_device_type": 1 00:23:15.563 }, 00:23:15.563 { 00:23:15.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.563 "dma_device_type": 2 00:23:15.563 } 00:23:15.563 ], 00:23:15.563 "driver_specific": { 00:23:15.563 "passthru": { 00:23:15.563 "name": "pt1", 00:23:15.563 "base_bdev_name": "malloc1" 00:23:15.563 } 00:23:15.563 } 00:23:15.563 }' 00:23:15.563 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.563 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.826 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.827 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.086 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.086 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.086 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.086 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:16.086 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.086 "name": "pt2", 00:23:16.086 "aliases": [ 00:23:16.086 "00000000-0000-0000-0000-000000000002" 00:23:16.086 ], 00:23:16.086 "product_name": "passthru", 00:23:16.086 "block_size": 512, 00:23:16.086 "num_blocks": 65536, 00:23:16.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.086 "assigned_rate_limits": { 00:23:16.086 "rw_ios_per_sec": 0, 00:23:16.086 "rw_mbytes_per_sec": 0, 00:23:16.086 "r_mbytes_per_sec": 0, 00:23:16.086 "w_mbytes_per_sec": 0 00:23:16.086 }, 00:23:16.086 "claimed": true, 00:23:16.086 "claim_type": 
"exclusive_write", 00:23:16.086 "zoned": false, 00:23:16.086 "supported_io_types": { 00:23:16.086 "read": true, 00:23:16.086 "write": true, 00:23:16.086 "unmap": true, 00:23:16.086 "flush": true, 00:23:16.086 "reset": true, 00:23:16.086 "nvme_admin": false, 00:23:16.086 "nvme_io": false, 00:23:16.086 "nvme_io_md": false, 00:23:16.086 "write_zeroes": true, 00:23:16.086 "zcopy": true, 00:23:16.086 "get_zone_info": false, 00:23:16.086 "zone_management": false, 00:23:16.086 "zone_append": false, 00:23:16.086 "compare": false, 00:23:16.086 "compare_and_write": false, 00:23:16.086 "abort": true, 00:23:16.086 "seek_hole": false, 00:23:16.086 "seek_data": false, 00:23:16.086 "copy": true, 00:23:16.086 "nvme_iov_md": false 00:23:16.086 }, 00:23:16.086 "memory_domains": [ 00:23:16.086 { 00:23:16.086 "dma_device_id": "system", 00:23:16.086 "dma_device_type": 1 00:23:16.086 }, 00:23:16.086 { 00:23:16.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.086 "dma_device_type": 2 00:23:16.086 } 00:23:16.086 ], 00:23:16.086 "driver_specific": { 00:23:16.086 "passthru": { 00:23:16.086 "name": "pt2", 00:23:16.086 "base_bdev_name": "malloc2" 00:23:16.086 } 00:23:16.086 } 00:23:16.086 }' 00:23:16.086 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.086 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.345 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.345 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.345 06:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.345 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.345 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.345 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.345 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.345 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.604 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.604 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.604 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.604 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.604 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:16.863 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.863 "name": "pt3", 00:23:16.863 "aliases": [ 00:23:16.863 "00000000-0000-0000-0000-000000000003" 00:23:16.863 ], 00:23:16.863 "product_name": "passthru", 00:23:16.863 "block_size": 512, 00:23:16.863 "num_blocks": 65536, 00:23:16.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:16.863 "assigned_rate_limits": { 00:23:16.863 "rw_ios_per_sec": 0, 00:23:16.863 "rw_mbytes_per_sec": 0, 00:23:16.863 "r_mbytes_per_sec": 0, 00:23:16.863 "w_mbytes_per_sec": 0 00:23:16.863 }, 00:23:16.863 "claimed": true, 00:23:16.863 "claim_type": "exclusive_write", 00:23:16.863 "zoned": false, 00:23:16.863 "supported_io_types": { 00:23:16.863 "read": true, 
00:23:16.863 "write": true, 00:23:16.863 "unmap": true, 00:23:16.863 "flush": true, 00:23:16.863 "reset": true, 00:23:16.863 "nvme_admin": false, 00:23:16.863 "nvme_io": false, 00:23:16.863 "nvme_io_md": false, 00:23:16.863 "write_zeroes": true, 00:23:16.863 "zcopy": true, 00:23:16.863 "get_zone_info": false, 00:23:16.863 "zone_management": false, 00:23:16.863 "zone_append": false, 00:23:16.863 "compare": false, 00:23:16.863 "compare_and_write": false, 00:23:16.863 "abort": true, 00:23:16.863 "seek_hole": false, 00:23:16.863 "seek_data": false, 00:23:16.863 "copy": true, 00:23:16.863 "nvme_iov_md": false 00:23:16.863 }, 00:23:16.863 "memory_domains": [ 00:23:16.863 { 00:23:16.864 "dma_device_id": "system", 00:23:16.864 "dma_device_type": 1 00:23:16.864 }, 00:23:16.864 { 00:23:16.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.864 "dma_device_type": 2 00:23:16.864 } 00:23:16.864 ], 00:23:16.864 "driver_specific": { 00:23:16.864 "passthru": { 00:23:16.864 "name": "pt3", 00:23:16.864 "base_bdev_name": "malloc3" 00:23:16.864 } 00:23:16.864 } 00:23:16.864 }' 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.864 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.123 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.123 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.123 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.123 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.123 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:17.123 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:17.123 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:17.383 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:17.383 "name": "pt4", 00:23:17.383 "aliases": [ 00:23:17.383 "00000000-0000-0000-0000-000000000004" 00:23:17.383 ], 00:23:17.383 "product_name": "passthru", 00:23:17.383 "block_size": 512, 00:23:17.383 "num_blocks": 65536, 00:23:17.383 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:17.383 "assigned_rate_limits": { 00:23:17.383 "rw_ios_per_sec": 0, 00:23:17.383 "rw_mbytes_per_sec": 0, 00:23:17.383 "r_mbytes_per_sec": 0, 00:23:17.383 "w_mbytes_per_sec": 0 00:23:17.383 }, 00:23:17.383 "claimed": true, 00:23:17.383 "claim_type": "exclusive_write", 00:23:17.383 "zoned": false, 00:23:17.383 "supported_io_types": { 00:23:17.383 "read": true, 00:23:17.383 "write": true, 00:23:17.383 "unmap": true, 00:23:17.383 "flush": true, 00:23:17.383 "reset": true, 
00:23:17.383 "nvme_admin": false, 00:23:17.383 "nvme_io": false, 00:23:17.383 "nvme_io_md": false, 00:23:17.383 "write_zeroes": true, 00:23:17.383 "zcopy": true, 00:23:17.383 "get_zone_info": false, 00:23:17.383 "zone_management": false, 00:23:17.383 "zone_append": false, 00:23:17.383 "compare": false, 00:23:17.383 "compare_and_write": false, 00:23:17.383 "abort": true, 00:23:17.383 "seek_hole": false, 00:23:17.383 "seek_data": false, 00:23:17.383 "copy": true, 00:23:17.383 "nvme_iov_md": false 00:23:17.383 }, 00:23:17.383 "memory_domains": [ 00:23:17.383 { 00:23:17.383 "dma_device_id": "system", 00:23:17.383 "dma_device_type": 1 00:23:17.383 }, 00:23:17.383 { 00:23:17.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.383 "dma_device_type": 2 00:23:17.383 } 00:23:17.383 ], 00:23:17.383 "driver_specific": { 00:23:17.383 "passthru": { 00:23:17.383 "name": "pt4", 00:23:17.383 "base_bdev_name": "malloc4" 00:23:17.383 } 00:23:17.383 } 00:23:17.383 }' 00:23:17.383 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.383 06:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.383 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:17.383 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.383 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.383 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:17.383 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.383 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.643 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.643 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.643 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.643 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.643 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:17.643 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:23:17.643 [2024-08-13 06:18:19.426511] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 6523f3ba-148d-470c-a868-cc1c083bdb3b '!=' 6523f3ba-148d-470c-a868-cc1c083bdb3b ']' 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:17.903 [2024-08-13 06:18:19.594221] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.903 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.162 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.162 "name": "raid_bdev1", 00:23:18.162 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:18.162 "strip_size_kb": 64, 00:23:18.162 "state": "online", 00:23:18.162 "raid_level": "raid5f", 00:23:18.162 "superblock": true, 00:23:18.162 "num_base_bdevs": 4, 00:23:18.162 "num_base_bdevs_discovered": 3, 00:23:18.162 "num_base_bdevs_operational": 3, 00:23:18.162 "base_bdevs_list": [ 00:23:18.162 { 00:23:18.162 "name": null, 00:23:18.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.162 "is_configured": false, 00:23:18.162 "data_offset": 2048, 00:23:18.162 "data_size": 63488 00:23:18.162 }, 00:23:18.162 { 00:23:18.162 "name": "pt2", 00:23:18.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.162 "is_configured": true, 00:23:18.162 "data_offset": 2048, 00:23:18.162 "data_size": 63488 00:23:18.162 }, 00:23:18.162 { 00:23:18.162 "name": "pt3", 00:23:18.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:18.162 "is_configured": true, 00:23:18.162 "data_offset": 2048, 00:23:18.162 "data_size": 63488 00:23:18.162 }, 00:23:18.162 { 00:23:18.162 "name": "pt4", 00:23:18.162 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:18.162 "is_configured": true, 00:23:18.162 "data_offset": 2048, 00:23:18.162 "data_size": 63488 00:23:18.162 } 00:23:18.162 ] 00:23:18.162 }' 00:23:18.162 06:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.162 06:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.731 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:18.731 [2024-08-13 06:18:20.432740] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.731 [2024-08-13 06:18:20.432780] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.731 [2024-08-13 06:18:20.432854] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.731 [2024-08-13 06:18:20.432926] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.731 
[2024-08-13 06:18:20.432934] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:23:18.731 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.731 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:23:18.991 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:23:18.991 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:23:18.991 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:18.991 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:23:18.991 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:19.251 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:19.251 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:23:19.251 06:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:23:19.511 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.793 [2024-08-13 06:18:21.414998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.793 [2024-08-13 06:18:21.415063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.793 [2024-08-13 06:18:21.415081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:19.793 [2024-08-13 06:18:21.415089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.793 [2024-08-13 06:18:21.417006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.793 [2024-08-13 06:18:21.417055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.793 [2024-08-13 06:18:21.417125] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:19.793 [2024-08-13 06:18:21.417166] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.793 pt2 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 
00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.793 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.065 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.065 "name": "raid_bdev1", 00:23:20.065 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:20.065 "strip_size_kb": 64, 00:23:20.065 "state": "configuring", 00:23:20.065 "raid_level": "raid5f", 00:23:20.065 "superblock": true, 00:23:20.065 "num_base_bdevs": 4, 00:23:20.065 "num_base_bdevs_discovered": 1, 00:23:20.065 "num_base_bdevs_operational": 3, 00:23:20.065 "base_bdevs_list": [ 00:23:20.065 { 00:23:20.065 "name": null, 00:23:20.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.065 "is_configured": false, 00:23:20.065 "data_offset": 2048, 00:23:20.065 "data_size": 63488 00:23:20.065 }, 00:23:20.065 { 00:23:20.065 "name": "pt2", 00:23:20.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.065 "is_configured": true, 00:23:20.065 "data_offset": 2048, 00:23:20.065 "data_size": 63488 00:23:20.065 }, 00:23:20.065 { 00:23:20.065 "name": null, 00:23:20.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:20.065 "is_configured": false, 00:23:20.065 "data_offset": 2048, 00:23:20.066 "data_size": 63488 00:23:20.066 }, 00:23:20.066 { 00:23:20.066 "name": null, 00:23:20.066 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:20.066 "is_configured": false, 00:23:20.066 "data_offset": 2048, 00:23:20.066 "data_size": 63488 00:23:20.066 } 00:23:20.066 ] 00:23:20.066 }' 00:23:20.066 06:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.066 06:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:20.635 [2024-08-13 06:18:22.365501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:20.635 [2024-08-13 06:18:22.365567] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.635 [2024-08-13 06:18:22.365588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:20.635 [2024-08-13 06:18:22.365597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.635 [2024-08-13 06:18:22.365998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.635 [2024-08-13 06:18:22.366023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:20.635 [2024-08-13 06:18:22.366110] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:20.635 [2024-08-13 06:18:22.366139] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:20.635 pt3 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.635 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.895 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.895 "name": "raid_bdev1", 00:23:20.895 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:20.895 "strip_size_kb": 64, 00:23:20.895 "state": "configuring", 00:23:20.895 "raid_level": "raid5f", 00:23:20.895 "superblock": true, 00:23:20.895 "num_base_bdevs": 4, 00:23:20.895 "num_base_bdevs_discovered": 2, 00:23:20.895 "num_base_bdevs_operational": 3, 00:23:20.895 "base_bdevs_list": [ 00:23:20.895 { 00:23:20.895 "name": null, 00:23:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.895 "is_configured": false, 00:23:20.895 "data_offset": 2048, 00:23:20.895 "data_size": 63488 00:23:20.895 }, 00:23:20.895 { 00:23:20.895 "name": "pt2", 00:23:20.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.895 "is_configured": true, 00:23:20.895 "data_offset": 2048, 00:23:20.895 "data_size": 63488 00:23:20.895 }, 00:23:20.895 { 00:23:20.895 "name": "pt3", 00:23:20.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:20.895 "is_configured": true, 00:23:20.895 "data_offset": 2048, 00:23:20.895 "data_size": 63488 00:23:20.895 }, 00:23:20.895 { 00:23:20.895 "name": null, 00:23:20.895 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:20.895 "is_configured": 
false, 00:23:20.895 "data_offset": 2048, 00:23:20.895 "data_size": 63488 00:23:20.895 } 00:23:20.895 ] 00:23:20.895 }' 00:23:20.895 06:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.895 06:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.465 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:23:21.465 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:23:21.465 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:21.465 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:21.725 [2024-08-13 06:18:23.267900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:21.725 [2024-08-13 06:18:23.267959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.725 [2024-08-13 06:18:23.267983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:21.725 [2024-08-13 06:18:23.267993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.725 [2024-08-13 06:18:23.268383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.725 [2024-08-13 06:18:23.268409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:21.725 [2024-08-13 06:18:23.268482] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:21.725 [2024-08-13 06:18:23.268502] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:21.725 [2024-08-13 06:18:23.268601] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:23:21.725 [2024-08-13 06:18:23.268616] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:21.725 [2024-08-13 06:18:23.268827] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:23:21.725 [2024-08-13 06:18:23.269331] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:23:21.725 [2024-08-13 06:18:23.269357] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:23:21.725 [2024-08-13 06:18:23.269554] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.725 pt4 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.725 06:18:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.725 "name": "raid_bdev1", 00:23:21.725 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:21.725 "strip_size_kb": 64, 00:23:21.725 "state": "online", 00:23:21.725 "raid_level": "raid5f", 00:23:21.725 "superblock": true, 00:23:21.725 "num_base_bdevs": 4, 00:23:21.725 "num_base_bdevs_discovered": 3, 00:23:21.725 "num_base_bdevs_operational": 3, 00:23:21.725 "base_bdevs_list": [ 00:23:21.725 { 00:23:21.725 "name": null, 00:23:21.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.725 "is_configured": false, 00:23:21.725 "data_offset": 2048, 00:23:21.725 "data_size": 63488 00:23:21.725 }, 00:23:21.725 { 00:23:21.725 "name": "pt2", 00:23:21.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:21.725 "is_configured": true, 00:23:21.725 "data_offset": 2048, 00:23:21.725 "data_size": 63488 00:23:21.725 }, 00:23:21.725 { 00:23:21.725 "name": "pt3", 00:23:21.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:21.725 "is_configured": true, 00:23:21.725 "data_offset": 2048, 00:23:21.725 "data_size": 63488 00:23:21.725 }, 00:23:21.725 { 00:23:21.725 "name": "pt4", 00:23:21.725 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:21.725 "is_configured": true, 00:23:21.725 "data_offset": 2048, 00:23:21.725 "data_size": 63488 00:23:21.725 } 00:23:21.725 ] 00:23:21.725 }' 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.725 06:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.294 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.553 [2024-08-13 06:18:24.214292] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.553 [2024-08-13 06:18:24.214322] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.553 [2024-08-13 06:18:24.214393] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.553 [2024-08-13 06:18:24.214461] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.553 [2024-08-13 06:18:24.214473] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:23:22.553 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.553 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:23:22.814 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:23:22.814 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:23:22.814 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:23:22.814 
06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:23:22.814 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:23.074 [2024-08-13 06:18:24.789331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:23.074 [2024-08-13 06:18:24.789396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.074 [2024-08-13 06:18:24.789412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:23.074 [2024-08-13 06:18:24.789422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.074 [2024-08-13 06:18:24.791587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.074 [2024-08-13 06:18:24.791631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:23.074 [2024-08-13 06:18:24.791702] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:23.074 [2024-08-13 06:18:24.791746] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:23.074 [2024-08-13 06:18:24.791869] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:23.074 [2024-08-13 06:18:24.791892] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.074 [2024-08-13 06:18:24.791907] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:23:23.074 [2024-08-13 06:18:24.791940] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:23.074 [2024-08-13 06:18:24.792040] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:23.074 pt1 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.074 06:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.334 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.334 "name": "raid_bdev1", 00:23:23.334 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:23.334 "strip_size_kb": 64, 00:23:23.334 "state": "configuring", 00:23:23.334 "raid_level": "raid5f", 00:23:23.334 "superblock": true, 00:23:23.334 "num_base_bdevs": 4, 00:23:23.334 "num_base_bdevs_discovered": 2, 00:23:23.334 "num_base_bdevs_operational": 3, 00:23:23.334 "base_bdevs_list": [ 00:23:23.334 { 00:23:23.334 "name": null, 00:23:23.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.334 "is_configured": false, 00:23:23.334 "data_offset": 2048, 00:23:23.334 "data_size": 63488 00:23:23.334 }, 00:23:23.334 { 00:23:23.334 "name": "pt2", 00:23:23.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:23.334 "is_configured": true, 00:23:23.334 "data_offset": 2048, 00:23:23.334 "data_size": 63488 00:23:23.334 }, 00:23:23.334 { 00:23:23.334 "name": "pt3", 00:23:23.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:23.334 "is_configured": true, 00:23:23.334 "data_offset": 2048, 00:23:23.334 "data_size": 63488 00:23:23.334 }, 00:23:23.334 { 00:23:23.334 "name": null, 00:23:23.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:23.334 "is_configured": false, 00:23:23.334 "data_offset": 2048, 00:23:23.335 "data_size": 63488 00:23:23.335 } 00:23:23.335 ] 00:23:23.335 }' 00:23:23.335 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.335 06:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.904 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:23.904 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:24.163 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:23:24.163 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:24.423 [2024-08-13 06:18:25.983331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:24.423 [2024-08-13 06:18:25.983390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.423 [2024-08-13 06:18:25.983410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:24.423 [2024-08-13 06:18:25.983418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.423 [2024-08-13 06:18:25.983789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.423 [2024-08-13 06:18:25.983814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:24.423 [2024-08-13 06:18:25.983882] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:24.423 [2024-08-13 06:18:25.983910] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:24.424 [2024-08-13 06:18:25.984024] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 
00:23:24.424 [2024-08-13 06:18:25.984051] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:24.424 [2024-08-13 06:18:25.984276] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:23:24.424 [2024-08-13 06:18:25.984767] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:23:24.424 [2024-08-13 06:18:25.984790] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:23:24.424 [2024-08-13 06:18:25.984950] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.424 pt4 00:23:24.424 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:24.424 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:24.424 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:24.424 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:24.424 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:24.424 06:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.424 "name": "raid_bdev1", 00:23:24.424 "uuid": "6523f3ba-148d-470c-a868-cc1c083bdb3b", 00:23:24.424 "strip_size_kb": 64, 00:23:24.424 "state": "online", 00:23:24.424 "raid_level": "raid5f", 00:23:24.424 "superblock": true, 00:23:24.424 "num_base_bdevs": 4, 00:23:24.424 "num_base_bdevs_discovered": 3, 00:23:24.424 "num_base_bdevs_operational": 3, 00:23:24.424 "base_bdevs_list": [ 00:23:24.424 { 00:23:24.424 "name": null, 00:23:24.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.424 "is_configured": false, 00:23:24.424 "data_offset": 2048, 00:23:24.424 "data_size": 63488 00:23:24.424 }, 00:23:24.424 { 00:23:24.424 "name": "pt2", 00:23:24.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:24.424 "is_configured": true, 00:23:24.424 "data_offset": 2048, 00:23:24.424 "data_size": 63488 00:23:24.424 }, 00:23:24.424 { 00:23:24.424 "name": "pt3", 00:23:24.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:24.424 "is_configured": true, 00:23:24.424 "data_offset": 2048, 00:23:24.424 "data_size": 63488 00:23:24.424 }, 00:23:24.424 { 00:23:24.424 "name": "pt4", 00:23:24.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:24.424 "is_configured": true, 00:23:24.424 "data_offset": 2048, 00:23:24.424 "data_size": 63488 00:23:24.424 } 00:23:24.424 ] 00:23:24.424 }' 00:23:24.424 06:18:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.424 06:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.993 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:24.993 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:25.253 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:23:25.253 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:23:25.253 06:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:25.514 [2024-08-13 06:18:27.169600] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 6523f3ba-148d-470c-a868-cc1c083bdb3b '!=' 6523f3ba-148d-470c-a868-cc1c083bdb3b ']' 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 103000 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 103000 ']' 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 103000 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103000 00:23:25.514 killing process with pid 103000 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103000' 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 103000 00:23:25.514 [2024-08-13 06:18:27.243937] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.514 [2024-08-13 06:18:27.244019] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:25.514 [2024-08-13 06:18:27.244101] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:25.514 [2024-08-13 06:18:27.244114] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:23:25.514 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 103000 00:23:25.514 [2024-08-13 06:18:27.287687] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.774 06:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:23:25.774 00:23:25.774 real 0m21.728s 00:23:25.774 user 0m39.986s 00:23:25.774 sys 0m3.604s 00:23:25.774 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:25.774 06:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.774 ************************************ 
00:23:25.774 END TEST raid5f_superblock_test 00:23:25.774 ************************************ 00:23:26.034 06:18:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # '[' true = true ']' 00:23:26.035 06:18:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:23:26.035 06:18:27 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:23:26.035 06:18:27 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.035 06:18:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:26.035 ************************************ 00:23:26.035 START TEST raid5f_rebuild_test 00:23:26.035 ************************************ 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 false false true 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:26.035 06:18:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=103775 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 103775 /var/tmp/spdk-raid.sock 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 103775 ']' 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.035 06:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.035 [2024-08-13 06:18:27.718907] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:23:26.035 [2024-08-13 06:18:27.719053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103775 ] 00:23:26.035 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:26.035 Zero copy mechanism will not be used. 
00:23:26.295 [2024-08-13 06:18:27.865338] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.295 [2024-08-13 06:18:27.910944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.295 [2024-08-13 06:18:27.953468] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.295 [2024-08-13 06:18:27.953513] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.865 06:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.865 06:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:23:26.865 06:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:26.865 06:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:27.125 BaseBdev1_malloc 00:23:27.125 06:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:27.385 [2024-08-13 06:18:28.929580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:27.385 [2024-08-13 06:18:28.929654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.385 [2024-08-13 06:18:28.929680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:27.385 [2024-08-13 06:18:28.929692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.385 [2024-08-13 06:18:28.931720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.385 [2024-08-13 06:18:28.931765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:27.385 BaseBdev1 00:23:27.385 06:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:27.385 06:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:27.385 BaseBdev2_malloc 00:23:27.385 06:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:27.645 [2024-08-13 06:18:29.329356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:27.645 [2024-08-13 06:18:29.329411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.645 [2024-08-13 06:18:29.329430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:27.645 [2024-08-13 06:18:29.329440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.645 [2024-08-13 06:18:29.331409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.645 [2024-08-13 06:18:29.331451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:27.645 BaseBdev2 00:23:27.645 06:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:27.645 06:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev3_malloc 00:23:27.904 BaseBdev3_malloc 00:23:27.904 06:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:28.164 [2024-08-13 06:18:29.745270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:28.164 [2024-08-13 06:18:29.745328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.164 [2024-08-13 06:18:29.745347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:28.164 [2024-08-13 06:18:29.745356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.164 [2024-08-13 06:18:29.747296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.164 [2024-08-13 06:18:29.747337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:28.164 BaseBdev3 00:23:28.164 06:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:28.164 06:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:28.164 BaseBdev4_malloc 00:23:28.424 06:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:28.424 [2024-08-13 06:18:30.153259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:28.424 [2024-08-13 06:18:30.153317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.424 [2024-08-13 06:18:30.153335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:28.424 [2024-08-13 06:18:30.153348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.424 [2024-08-13 06:18:30.155360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.424 [2024-08-13 06:18:30.155451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:28.424 BaseBdev4 00:23:28.424 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:28.683 spare_malloc 00:23:28.683 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:28.943 spare_delay 00:23:28.943 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:29.204 [2024-08-13 06:18:30.752826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:29.204 [2024-08-13 06:18:30.752893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.204 [2024-08-13 06:18:30.752913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:29.204 [2024-08-13 06:18:30.752924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.204 [2024-08-13 06:18:30.754969] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.204 [2024-08-13 06:18:30.755094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:29.204 spare 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:29.204 [2024-08-13 06:18:30.948539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:29.204 [2024-08-13 06:18:30.950289] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:29.204 [2024-08-13 06:18:30.950385] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:29.204 [2024-08-13 06:18:30.950445] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:29.204 [2024-08-13 06:18:30.950576] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:23:29.204 [2024-08-13 06:18:30.950617] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:29.204 [2024-08-13 06:18:30.950889] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:29.204 [2024-08-13 06:18:30.951367] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:23:29.204 [2024-08-13 06:18:30.951413] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:23:29.204 [2024-08-13 06:18:30.951589] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.204 06:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.464 06:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.464 "name": "raid_bdev1", 00:23:29.464 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:29.464 "strip_size_kb": 64, 00:23:29.464 "state": "online", 00:23:29.464 "raid_level": "raid5f", 00:23:29.464 "superblock": false, 00:23:29.464 "num_base_bdevs": 4, 00:23:29.464 "num_base_bdevs_discovered": 4, 
00:23:29.464 "num_base_bdevs_operational": 4, 00:23:29.464 "base_bdevs_list": [ 00:23:29.464 { 00:23:29.464 "name": "BaseBdev1", 00:23:29.464 "uuid": "f1b8e9af-885d-54a4-b24a-cfce572eef64", 00:23:29.464 "is_configured": true, 00:23:29.464 "data_offset": 0, 00:23:29.464 "data_size": 65536 00:23:29.464 }, 00:23:29.464 { 00:23:29.464 "name": "BaseBdev2", 00:23:29.464 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:29.464 "is_configured": true, 00:23:29.464 "data_offset": 0, 00:23:29.464 "data_size": 65536 00:23:29.464 }, 00:23:29.464 { 00:23:29.464 "name": "BaseBdev3", 00:23:29.464 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:29.464 "is_configured": true, 00:23:29.464 "data_offset": 0, 00:23:29.464 "data_size": 65536 00:23:29.464 }, 00:23:29.464 { 00:23:29.464 "name": "BaseBdev4", 00:23:29.464 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:29.464 "is_configured": true, 00:23:29.464 "data_offset": 0, 00:23:29.464 "data_size": 65536 00:23:29.464 } 00:23:29.464 ] 00:23:29.464 }' 00:23:29.464 06:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.464 06:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.033 06:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:30.033 06:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:30.293 [2024-08-13 06:18:31.863917] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.293 06:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=196608 00:23:30.293 06:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.293 06:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:30.293 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 
/dev/nbd0 00:23:30.553 [2024-08-13 06:18:32.255125] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:23:30.553 /dev/nbd0 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:30.553 1+0 records in 00:23:30.553 1+0 records out 00:23:30.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047209 s, 8.7 MB/s 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 192 00:23:30.553 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:31.123 512+0 records in 00:23:31.124 512+0 records out 00:23:31.124 100663296 bytes (101 MB, 96 MiB) copied, 0.398284 s, 253 MB/s 00:23:31.124 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:31.124 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:31.124 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:31.124 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:31.124 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:31.124 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:31.124 06:18:32 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:31.384 [2024-08-13 06:18:32.934253] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:31.384 06:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:31.384 [2024-08-13 06:18:33.126050] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.384 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.644 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.644 "name": "raid_bdev1", 00:23:31.644 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:31.644 "strip_size_kb": 64, 00:23:31.644 "state": "online", 00:23:31.644 "raid_level": "raid5f", 00:23:31.644 "superblock": false, 00:23:31.644 "num_base_bdevs": 4, 00:23:31.644 "num_base_bdevs_discovered": 3, 00:23:31.644 "num_base_bdevs_operational": 3, 00:23:31.644 "base_bdevs_list": [ 00:23:31.644 { 00:23:31.644 "name": null, 00:23:31.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.644 "is_configured": false, 00:23:31.644 "data_offset": 0, 00:23:31.644 "data_size": 65536 00:23:31.644 }, 00:23:31.644 { 00:23:31.644 "name": "BaseBdev2", 00:23:31.644 "uuid": 
"b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:31.644 "is_configured": true, 00:23:31.644 "data_offset": 0, 00:23:31.644 "data_size": 65536 00:23:31.644 }, 00:23:31.644 { 00:23:31.644 "name": "BaseBdev3", 00:23:31.644 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:31.644 "is_configured": true, 00:23:31.644 "data_offset": 0, 00:23:31.644 "data_size": 65536 00:23:31.644 }, 00:23:31.644 { 00:23:31.644 "name": "BaseBdev4", 00:23:31.644 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:31.644 "is_configured": true, 00:23:31.644 "data_offset": 0, 00:23:31.644 "data_size": 65536 00:23:31.644 } 00:23:31.644 ] 00:23:31.644 }' 00:23:31.644 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.644 06:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.214 06:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:32.473 [2024-08-13 06:18:34.052418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.473 [2024-08-13 06:18:34.055887] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:23:32.473 [2024-08-13 06:18:34.058028] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:32.473 06:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:33.412 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.412 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:33.412 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:33.412 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:33.412 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:33.412 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.412 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.672 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:33.672 "name": "raid_bdev1", 00:23:33.672 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:33.672 "strip_size_kb": 64, 00:23:33.672 "state": "online", 00:23:33.672 "raid_level": "raid5f", 00:23:33.672 "superblock": false, 00:23:33.672 "num_base_bdevs": 4, 00:23:33.672 "num_base_bdevs_discovered": 4, 00:23:33.672 "num_base_bdevs_operational": 4, 00:23:33.672 "process": { 00:23:33.672 "type": "rebuild", 00:23:33.672 "target": "spare", 00:23:33.672 "progress": { 00:23:33.672 "blocks": 23040, 00:23:33.672 "percent": 11 00:23:33.672 } 00:23:33.672 }, 00:23:33.672 "base_bdevs_list": [ 00:23:33.672 { 00:23:33.672 "name": "spare", 00:23:33.672 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:33.672 "is_configured": true, 00:23:33.672 "data_offset": 0, 00:23:33.672 "data_size": 65536 00:23:33.672 }, 00:23:33.672 { 00:23:33.672 "name": "BaseBdev2", 00:23:33.672 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:33.672 "is_configured": true, 00:23:33.672 "data_offset": 0, 00:23:33.672 "data_size": 65536 00:23:33.672 }, 00:23:33.672 { 00:23:33.672 "name": "BaseBdev3", 
00:23:33.672 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:33.672 "is_configured": true, 00:23:33.672 "data_offset": 0, 00:23:33.672 "data_size": 65536 00:23:33.672 }, 00:23:33.672 { 00:23:33.672 "name": "BaseBdev4", 00:23:33.672 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:33.672 "is_configured": true, 00:23:33.672 "data_offset": 0, 00:23:33.672 "data_size": 65536 00:23:33.672 } 00:23:33.672 ] 00:23:33.672 }' 00:23:33.672 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:33.672 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.672 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:33.672 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.672 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:33.932 [2024-08-13 06:18:35.556133] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.932 [2024-08-13 06:18:35.564757] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:33.932 [2024-08-13 06:18:35.564819] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.932 [2024-08-13 06:18:35.564836] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.932 [2024-08-13 06:18:35.564845] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.932 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.192 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.192 "name": "raid_bdev1", 00:23:34.192 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:34.192 "strip_size_kb": 64, 00:23:34.192 "state": "online", 00:23:34.192 "raid_level": "raid5f", 00:23:34.192 "superblock": false, 00:23:34.192 "num_base_bdevs": 4, 00:23:34.192 "num_base_bdevs_discovered": 3, 00:23:34.192 "num_base_bdevs_operational": 3, 
00:23:34.192 "base_bdevs_list": [ 00:23:34.192 { 00:23:34.192 "name": null, 00:23:34.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.192 "is_configured": false, 00:23:34.192 "data_offset": 0, 00:23:34.192 "data_size": 65536 00:23:34.192 }, 00:23:34.192 { 00:23:34.192 "name": "BaseBdev2", 00:23:34.192 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:34.192 "is_configured": true, 00:23:34.192 "data_offset": 0, 00:23:34.192 "data_size": 65536 00:23:34.192 }, 00:23:34.192 { 00:23:34.192 "name": "BaseBdev3", 00:23:34.192 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:34.192 "is_configured": true, 00:23:34.192 "data_offset": 0, 00:23:34.192 "data_size": 65536 00:23:34.192 }, 00:23:34.192 { 00:23:34.192 "name": "BaseBdev4", 00:23:34.192 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:34.192 "is_configured": true, 00:23:34.192 "data_offset": 0, 00:23:34.192 "data_size": 65536 00:23:34.192 } 00:23:34.192 ] 00:23:34.192 }' 00:23:34.192 06:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.192 06:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:34.762 "name": "raid_bdev1", 00:23:34.762 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:34.762 "strip_size_kb": 64, 00:23:34.762 "state": "online", 00:23:34.762 "raid_level": "raid5f", 00:23:34.762 "superblock": false, 00:23:34.762 "num_base_bdevs": 4, 00:23:34.762 "num_base_bdevs_discovered": 3, 00:23:34.762 "num_base_bdevs_operational": 3, 00:23:34.762 "base_bdevs_list": [ 00:23:34.762 { 00:23:34.762 "name": null, 00:23:34.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.762 "is_configured": false, 00:23:34.762 "data_offset": 0, 00:23:34.762 "data_size": 65536 00:23:34.762 }, 00:23:34.762 { 00:23:34.762 "name": "BaseBdev2", 00:23:34.762 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:34.762 "is_configured": true, 00:23:34.762 "data_offset": 0, 00:23:34.762 "data_size": 65536 00:23:34.762 }, 00:23:34.762 { 00:23:34.762 "name": "BaseBdev3", 00:23:34.762 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:34.762 "is_configured": true, 00:23:34.762 "data_offset": 0, 00:23:34.762 "data_size": 65536 00:23:34.762 }, 00:23:34.762 { 00:23:34.762 "name": "BaseBdev4", 00:23:34.762 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:34.762 "is_configured": true, 00:23:34.762 "data_offset": 0, 00:23:34.762 "data_size": 65536 00:23:34.762 } 00:23:34.762 ] 00:23:34.762 }' 00:23:34.762 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.022 
06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:35.022 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.022 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:35.022 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:35.022 [2024-08-13 06:18:36.788213] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:35.022 [2024-08-13 06:18:36.791647] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:23:35.022 [2024-08-13 06:18:36.793693] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:35.022 06:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:36.402 06:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.402 06:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.402 06:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:36.402 06:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:36.402 06:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.402 06:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.402 06:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.402 "name": "raid_bdev1", 00:23:36.402 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:36.402 "strip_size_kb": 64, 00:23:36.402 "state": "online", 00:23:36.402 "raid_level": "raid5f", 00:23:36.402 "superblock": false, 00:23:36.402 "num_base_bdevs": 4, 00:23:36.402 "num_base_bdevs_discovered": 4, 00:23:36.402 "num_base_bdevs_operational": 4, 00:23:36.402 "process": { 00:23:36.402 "type": "rebuild", 00:23:36.402 "target": "spare", 00:23:36.402 "progress": { 00:23:36.402 "blocks": 21120, 00:23:36.402 "percent": 10 00:23:36.402 } 00:23:36.402 }, 00:23:36.402 "base_bdevs_list": [ 00:23:36.402 { 00:23:36.402 "name": "spare", 00:23:36.402 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:36.402 "is_configured": true, 00:23:36.402 "data_offset": 0, 00:23:36.402 "data_size": 65536 00:23:36.402 }, 00:23:36.402 { 00:23:36.402 "name": "BaseBdev2", 00:23:36.402 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:36.402 "is_configured": true, 00:23:36.402 "data_offset": 0, 00:23:36.402 "data_size": 65536 00:23:36.402 }, 00:23:36.402 { 00:23:36.402 "name": "BaseBdev3", 00:23:36.402 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:36.402 "is_configured": true, 00:23:36.402 "data_offset": 0, 00:23:36.402 "data_size": 65536 00:23:36.402 }, 00:23:36.402 { 00:23:36.402 "name": "BaseBdev4", 00:23:36.402 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:36.402 "is_configured": true, 00:23:36.402 "data_offset": 0, 00:23:36.402 "data_size": 65536 00:23:36.402 } 00:23:36.402 ] 00:23:36.402 }' 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq 
-r '.process.type // "none"' 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1070 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.402 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.662 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.662 "name": "raid_bdev1", 00:23:36.662 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:36.662 "strip_size_kb": 64, 00:23:36.662 "state": "online", 00:23:36.662 "raid_level": "raid5f", 00:23:36.662 "superblock": false, 00:23:36.662 "num_base_bdevs": 4, 00:23:36.662 "num_base_bdevs_discovered": 4, 00:23:36.662 "num_base_bdevs_operational": 4, 00:23:36.662 "process": { 00:23:36.662 "type": "rebuild", 00:23:36.662 "target": "spare", 00:23:36.662 "progress": { 00:23:36.662 "blocks": 26880, 00:23:36.662 "percent": 13 00:23:36.662 } 00:23:36.662 }, 00:23:36.662 "base_bdevs_list": [ 00:23:36.662 { 00:23:36.662 "name": "spare", 00:23:36.662 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:36.662 "is_configured": true, 00:23:36.662 "data_offset": 0, 00:23:36.662 "data_size": 65536 00:23:36.662 }, 00:23:36.662 { 00:23:36.662 "name": "BaseBdev2", 00:23:36.662 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:36.662 "is_configured": true, 00:23:36.662 "data_offset": 0, 00:23:36.662 "data_size": 65536 00:23:36.662 }, 00:23:36.662 { 00:23:36.662 "name": "BaseBdev3", 00:23:36.662 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:36.662 "is_configured": true, 00:23:36.662 "data_offset": 0, 00:23:36.662 "data_size": 65536 00:23:36.662 }, 00:23:36.662 { 00:23:36.662 "name": "BaseBdev4", 00:23:36.662 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:36.662 "is_configured": true, 00:23:36.662 "data_offset": 0, 00:23:36.662 "data_size": 65536 00:23:36.662 } 00:23:36.662 ] 00:23:36.662 }' 00:23:36.662 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:36.662 06:18:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.662 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.662 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.662 06:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.041 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.041 "name": "raid_bdev1", 00:23:38.041 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:38.041 "strip_size_kb": 64, 00:23:38.041 "state": "online", 00:23:38.042 "raid_level": "raid5f", 00:23:38.042 "superblock": false, 00:23:38.042 "num_base_bdevs": 4, 00:23:38.042 "num_base_bdevs_discovered": 4, 00:23:38.042 "num_base_bdevs_operational": 4, 00:23:38.042 "process": { 00:23:38.042 "type": "rebuild", 00:23:38.042 "target": "spare", 00:23:38.042 "progress": { 00:23:38.042 "blocks": 51840, 00:23:38.042 "percent": 26 00:23:38.042 } 00:23:38.042 }, 00:23:38.042 "base_bdevs_list": [ 00:23:38.042 { 00:23:38.042 "name": "spare", 00:23:38.042 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:38.042 "is_configured": true, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 65536 00:23:38.042 }, 00:23:38.042 { 00:23:38.042 "name": "BaseBdev2", 00:23:38.042 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:38.042 "is_configured": true, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 65536 00:23:38.042 }, 00:23:38.042 { 00:23:38.042 "name": "BaseBdev3", 00:23:38.042 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:38.042 "is_configured": true, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 65536 00:23:38.042 }, 00:23:38.042 { 00:23:38.042 "name": "BaseBdev4", 00:23:38.042 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:38.042 "is_configured": true, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 65536 00:23:38.042 } 00:23:38.042 ] 00:23:38.042 }' 00:23:38.042 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:38.042 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.042 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:38.042 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.042 06:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:38.980 06:18:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:38.980 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.980 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:38.980 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:38.980 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:38.980 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:38.980 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.980 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.239 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.239 "name": "raid_bdev1", 00:23:39.239 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:39.239 "strip_size_kb": 64, 00:23:39.239 "state": "online", 00:23:39.239 "raid_level": "raid5f", 00:23:39.239 "superblock": false, 00:23:39.239 "num_base_bdevs": 4, 00:23:39.239 "num_base_bdevs_discovered": 4, 00:23:39.239 "num_base_bdevs_operational": 4, 00:23:39.239 "process": { 00:23:39.239 "type": "rebuild", 00:23:39.239 "target": "spare", 00:23:39.239 "progress": { 00:23:39.239 "blocks": 76800, 00:23:39.239 "percent": 39 00:23:39.239 } 00:23:39.239 }, 00:23:39.239 "base_bdevs_list": [ 00:23:39.239 { 00:23:39.239 "name": "spare", 00:23:39.239 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:39.239 "is_configured": true, 00:23:39.239 "data_offset": 0, 00:23:39.239 "data_size": 65536 00:23:39.239 }, 00:23:39.239 { 00:23:39.239 "name": "BaseBdev2", 00:23:39.239 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:39.239 "is_configured": true, 00:23:39.239 "data_offset": 0, 00:23:39.239 "data_size": 65536 00:23:39.239 }, 00:23:39.239 { 00:23:39.239 "name": "BaseBdev3", 00:23:39.239 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:39.239 "is_configured": true, 00:23:39.239 "data_offset": 0, 00:23:39.239 "data_size": 65536 00:23:39.239 }, 00:23:39.239 { 00:23:39.239 "name": "BaseBdev4", 00:23:39.240 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:39.240 "is_configured": true, 00:23:39.240 "data_offset": 0, 00:23:39.240 "data_size": 65536 00:23:39.240 } 00:23:39.240 ] 00:23:39.240 }' 00:23:39.240 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:39.240 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.240 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:39.240 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.240 06:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:40.619 06:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:40.619 06:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.619 06:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:40.619 06:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:23:40.619 06:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:40.619 06:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:40.619 "name": "raid_bdev1", 00:23:40.619 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:40.619 "strip_size_kb": 64, 00:23:40.619 "state": "online", 00:23:40.619 "raid_level": "raid5f", 00:23:40.619 "superblock": false, 00:23:40.619 "num_base_bdevs": 4, 00:23:40.619 "num_base_bdevs_discovered": 4, 00:23:40.619 "num_base_bdevs_operational": 4, 00:23:40.619 "process": { 00:23:40.619 "type": "rebuild", 00:23:40.619 "target": "spare", 00:23:40.619 "progress": { 00:23:40.619 "blocks": 101760, 00:23:40.619 "percent": 51 00:23:40.619 } 00:23:40.619 }, 00:23:40.619 "base_bdevs_list": [ 00:23:40.619 { 00:23:40.619 "name": "spare", 00:23:40.619 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:40.619 "is_configured": true, 00:23:40.619 "data_offset": 0, 00:23:40.619 "data_size": 65536 00:23:40.619 }, 00:23:40.619 { 00:23:40.619 "name": "BaseBdev2", 00:23:40.619 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:40.619 "is_configured": true, 00:23:40.619 "data_offset": 0, 00:23:40.619 "data_size": 65536 00:23:40.619 }, 00:23:40.619 { 00:23:40.619 "name": "BaseBdev3", 00:23:40.619 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:40.619 "is_configured": true, 00:23:40.619 "data_offset": 0, 00:23:40.619 "data_size": 65536 00:23:40.619 }, 00:23:40.619 { 00:23:40.619 "name": "BaseBdev4", 00:23:40.619 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:40.619 "is_configured": true, 00:23:40.619 "data_offset": 0, 00:23:40.619 "data_size": 65536 00:23:40.619 } 00:23:40.619 ] 00:23:40.619 }' 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.619 06:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:41.557 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:41.557 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.557 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:41.557 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:41.557 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:41.557 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:41.557 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.557 
06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.817 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:41.817 "name": "raid_bdev1", 00:23:41.817 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:41.817 "strip_size_kb": 64, 00:23:41.817 "state": "online", 00:23:41.817 "raid_level": "raid5f", 00:23:41.817 "superblock": false, 00:23:41.817 "num_base_bdevs": 4, 00:23:41.817 "num_base_bdevs_discovered": 4, 00:23:41.817 "num_base_bdevs_operational": 4, 00:23:41.817 "process": { 00:23:41.817 "type": "rebuild", 00:23:41.817 "target": "spare", 00:23:41.817 "progress": { 00:23:41.817 "blocks": 126720, 00:23:41.817 "percent": 64 00:23:41.817 } 00:23:41.817 }, 00:23:41.817 "base_bdevs_list": [ 00:23:41.817 { 00:23:41.817 "name": "spare", 00:23:41.817 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:41.817 "is_configured": true, 00:23:41.817 "data_offset": 0, 00:23:41.817 "data_size": 65536 00:23:41.817 }, 00:23:41.817 { 00:23:41.817 "name": "BaseBdev2", 00:23:41.817 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:41.817 "is_configured": true, 00:23:41.817 "data_offset": 0, 00:23:41.817 "data_size": 65536 00:23:41.817 }, 00:23:41.817 { 00:23:41.817 "name": "BaseBdev3", 00:23:41.817 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:41.817 "is_configured": true, 00:23:41.817 "data_offset": 0, 00:23:41.817 "data_size": 65536 00:23:41.817 }, 00:23:41.817 { 00:23:41.817 "name": "BaseBdev4", 00:23:41.817 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:41.817 "is_configured": true, 00:23:41.817 "data_offset": 0, 00:23:41.817 "data_size": 65536 00:23:41.817 } 00:23:41.817 ] 00:23:41.817 }' 00:23:41.817 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:41.817 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.817 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:41.817 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.817 06:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:43.212 "name": "raid_bdev1", 00:23:43.212 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:43.212 "strip_size_kb": 64, 00:23:43.212 "state": "online", 00:23:43.212 "raid_level": "raid5f", 
00:23:43.212 "superblock": false, 00:23:43.212 "num_base_bdevs": 4, 00:23:43.212 "num_base_bdevs_discovered": 4, 00:23:43.212 "num_base_bdevs_operational": 4, 00:23:43.212 "process": { 00:23:43.212 "type": "rebuild", 00:23:43.212 "target": "spare", 00:23:43.212 "progress": { 00:23:43.212 "blocks": 151680, 00:23:43.212 "percent": 77 00:23:43.212 } 00:23:43.212 }, 00:23:43.212 "base_bdevs_list": [ 00:23:43.212 { 00:23:43.212 "name": "spare", 00:23:43.212 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:43.212 "is_configured": true, 00:23:43.212 "data_offset": 0, 00:23:43.212 "data_size": 65536 00:23:43.212 }, 00:23:43.212 { 00:23:43.212 "name": "BaseBdev2", 00:23:43.212 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:43.212 "is_configured": true, 00:23:43.212 "data_offset": 0, 00:23:43.212 "data_size": 65536 00:23:43.212 }, 00:23:43.212 { 00:23:43.212 "name": "BaseBdev3", 00:23:43.212 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:43.212 "is_configured": true, 00:23:43.212 "data_offset": 0, 00:23:43.212 "data_size": 65536 00:23:43.212 }, 00:23:43.212 { 00:23:43.212 "name": "BaseBdev4", 00:23:43.212 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:43.212 "is_configured": true, 00:23:43.212 "data_offset": 0, 00:23:43.212 "data_size": 65536 00:23:43.212 } 00:23:43.212 ] 00:23:43.212 }' 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.212 06:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.172 06:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.431 06:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.431 "name": "raid_bdev1", 00:23:44.431 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:44.431 "strip_size_kb": 64, 00:23:44.431 "state": "online", 00:23:44.431 "raid_level": "raid5f", 00:23:44.431 "superblock": false, 00:23:44.431 "num_base_bdevs": 4, 00:23:44.431 "num_base_bdevs_discovered": 4, 00:23:44.431 "num_base_bdevs_operational": 4, 00:23:44.431 "process": { 00:23:44.431 "type": "rebuild", 00:23:44.431 "target": "spare", 00:23:44.431 "progress": { 00:23:44.431 "blocks": 176640, 00:23:44.431 "percent": 89 00:23:44.431 } 00:23:44.431 }, 00:23:44.431 "base_bdevs_list": [ 00:23:44.431 { 
00:23:44.431 "name": "spare", 00:23:44.431 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:44.431 "is_configured": true, 00:23:44.431 "data_offset": 0, 00:23:44.431 "data_size": 65536 00:23:44.431 }, 00:23:44.431 { 00:23:44.431 "name": "BaseBdev2", 00:23:44.431 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:44.431 "is_configured": true, 00:23:44.431 "data_offset": 0, 00:23:44.431 "data_size": 65536 00:23:44.431 }, 00:23:44.431 { 00:23:44.431 "name": "BaseBdev3", 00:23:44.431 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:44.431 "is_configured": true, 00:23:44.431 "data_offset": 0, 00:23:44.431 "data_size": 65536 00:23:44.431 }, 00:23:44.431 { 00:23:44.431 "name": "BaseBdev4", 00:23:44.431 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:44.431 "is_configured": true, 00:23:44.431 "data_offset": 0, 00:23:44.431 "data_size": 65536 00:23:44.431 } 00:23:44.431 ] 00:23:44.431 }' 00:23:44.431 06:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.431 06:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.431 06:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.431 06:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.431 06:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:45.370 [2024-08-13 06:18:47.133736] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:45.370 [2024-08-13 06:18:47.133840] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:45.370 [2024-08-13 06:18:47.133899] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.630 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:45.890 "name": "raid_bdev1", 00:23:45.890 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:45.890 "strip_size_kb": 64, 00:23:45.890 "state": "online", 00:23:45.890 "raid_level": "raid5f", 00:23:45.890 "superblock": false, 00:23:45.890 "num_base_bdevs": 4, 00:23:45.890 "num_base_bdevs_discovered": 4, 00:23:45.890 "num_base_bdevs_operational": 4, 00:23:45.890 "base_bdevs_list": [ 00:23:45.890 { 00:23:45.890 "name": "spare", 00:23:45.890 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:45.890 "is_configured": true, 00:23:45.890 "data_offset": 0, 00:23:45.890 "data_size": 65536 00:23:45.890 }, 00:23:45.890 { 00:23:45.890 "name": 
"BaseBdev2", 00:23:45.890 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:45.890 "is_configured": true, 00:23:45.890 "data_offset": 0, 00:23:45.890 "data_size": 65536 00:23:45.890 }, 00:23:45.890 { 00:23:45.890 "name": "BaseBdev3", 00:23:45.890 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:45.890 "is_configured": true, 00:23:45.890 "data_offset": 0, 00:23:45.890 "data_size": 65536 00:23:45.890 }, 00:23:45.890 { 00:23:45.890 "name": "BaseBdev4", 00:23:45.890 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:45.890 "is_configured": true, 00:23:45.890 "data_offset": 0, 00:23:45.890 "data_size": 65536 00:23:45.890 } 00:23:45.890 ] 00:23:45.890 }' 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.890 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.149 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:46.149 "name": "raid_bdev1", 00:23:46.149 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:46.149 "strip_size_kb": 64, 00:23:46.149 "state": "online", 00:23:46.149 "raid_level": "raid5f", 00:23:46.149 "superblock": false, 00:23:46.149 "num_base_bdevs": 4, 00:23:46.149 "num_base_bdevs_discovered": 4, 00:23:46.149 "num_base_bdevs_operational": 4, 00:23:46.149 "base_bdevs_list": [ 00:23:46.149 { 00:23:46.149 "name": "spare", 00:23:46.149 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:46.149 "is_configured": true, 00:23:46.149 "data_offset": 0, 00:23:46.149 "data_size": 65536 00:23:46.149 }, 00:23:46.149 { 00:23:46.149 "name": "BaseBdev2", 00:23:46.149 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:46.149 "is_configured": true, 00:23:46.149 "data_offset": 0, 00:23:46.149 "data_size": 65536 00:23:46.149 }, 00:23:46.149 { 00:23:46.149 "name": "BaseBdev3", 00:23:46.149 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:46.149 "is_configured": true, 00:23:46.149 "data_offset": 0, 00:23:46.149 "data_size": 65536 00:23:46.149 }, 00:23:46.149 { 00:23:46.149 "name": "BaseBdev4", 00:23:46.149 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:46.149 "is_configured": true, 00:23:46.149 "data_offset": 0, 00:23:46.149 "data_size": 65536 00:23:46.149 } 00:23:46.149 ] 00:23:46.149 }' 00:23:46.149 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 
-- # jq -r '.process.type // "none"' 00:23:46.149 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:46.149 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.150 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.409 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.409 "name": "raid_bdev1", 00:23:46.409 "uuid": "b6a88325-9334-4abd-821d-9b3d697666e9", 00:23:46.409 "strip_size_kb": 64, 00:23:46.409 "state": "online", 00:23:46.409 "raid_level": "raid5f", 00:23:46.409 "superblock": false, 00:23:46.409 "num_base_bdevs": 4, 00:23:46.409 "num_base_bdevs_discovered": 4, 00:23:46.409 "num_base_bdevs_operational": 4, 00:23:46.409 "base_bdevs_list": [ 00:23:46.409 { 00:23:46.409 "name": "spare", 00:23:46.409 "uuid": "9134be3c-cf17-578b-8f7e-f5f335c06c0a", 00:23:46.409 "is_configured": true, 00:23:46.409 "data_offset": 0, 00:23:46.409 "data_size": 65536 00:23:46.409 }, 00:23:46.409 { 00:23:46.409 "name": "BaseBdev2", 00:23:46.409 "uuid": "b4bf82ea-1a4f-5292-ba56-47f70ae5f33d", 00:23:46.409 "is_configured": true, 00:23:46.409 "data_offset": 0, 00:23:46.409 "data_size": 65536 00:23:46.409 }, 00:23:46.409 { 00:23:46.409 "name": "BaseBdev3", 00:23:46.409 "uuid": "cdd90d96-165a-5233-91f9-3f37a72ba41f", 00:23:46.409 "is_configured": true, 00:23:46.409 "data_offset": 0, 00:23:46.409 "data_size": 65536 00:23:46.409 }, 00:23:46.409 { 00:23:46.409 "name": "BaseBdev4", 00:23:46.409 "uuid": "cabf35b6-7249-5230-a26c-8be12df745f6", 00:23:46.409 "is_configured": true, 00:23:46.409 "data_offset": 0, 00:23:46.409 "data_size": 65536 00:23:46.409 } 00:23:46.409 ] 00:23:46.409 }' 00:23:46.409 06:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.409 06:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.979 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:23:46.979 [2024-08-13 06:18:48.716345] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.979 [2024-08-13 06:18:48.716375] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.979 [2024-08-13 06:18:48.716459] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.979 [2024-08-13 06:18:48.716546] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.979 [2024-08-13 06:18:48.716557] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:23:46.979 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.979 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:23:47.238 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:47.238 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:47.238 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:47.239 06:18:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:47.498 /dev/nbd0 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:47.498 1+0 records in 00:23:47.498 1+0 records out 00:23:47.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440822 s, 9.3 MB/s 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:47.498 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:47.758 /dev/nbd1 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:47.758 1+0 records in 00:23:47.758 1+0 records out 00:23:47.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609346 s, 6.7 MB/s 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:47.758 06:18:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.758 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:48.018 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 103775 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 103775 ']' 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 103775 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:48.278 06:18:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103775 00:23:48.278 06:18:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:23:48.278 06:18:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:48.278 06:18:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103775' 00:23:48.278 killing process with pid 103775 00:23:48.278 Received shutdown signal, test time was about 60.000000 seconds 00:23:48.278 00:23:48.278 Latency(us) 00:23:48.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.278 =================================================================================================================== 00:23:48.278 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:48.278 06:18:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 103775 00:23:48.278 [2024-08-13 06:18:50.004914] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.278 06:18:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 103775 00:23:48.278 [2024-08-13 06:18:50.055210] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:48.539 06:18:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:23:48.539 00:23:48.539 real 0m22.664s 00:23:48.539 user 0m32.551s 00:23:48.539 sys 0m3.163s 00:23:48.539 06:18:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:48.539 06:18:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.539 ************************************ 00:23:48.539 END TEST raid5f_rebuild_test 00:23:48.539 ************************************ 00:23:48.799 06:18:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:23:48.799 06:18:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:23:48.799 06:18:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:48.799 06:18:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:48.799 ************************************ 00:23:48.799 START TEST raid5f_rebuild_test_sb 00:23:48.799 ************************************ 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 true false true 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=104317 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 104317 /var/tmp/spdk-raid.sock 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:48.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 104317 ']' 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:48.799 06:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.799 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:48.799 Zero copy mechanism will not be used. 00:23:48.799 [2024-08-13 06:18:50.472309] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:23:48.799 [2024-08-13 06:18:50.472465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104317 ] 00:23:49.059 [2024-08-13 06:18:50.619961] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.059 [2024-08-13 06:18:50.664741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.059 [2024-08-13 06:18:50.707355] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.059 [2024-08-13 06:18:50.707483] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.629 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:49.629 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:23:49.629 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:49.629 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:49.889 BaseBdev1_malloc 00:23:49.889 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:49.889 [2024-08-13 06:18:51.663368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:49.889 [2024-08-13 06:18:51.663430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.889 [2024-08-13 06:18:51.663452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:49.889 [2024-08-13 06:18:51.663469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.889 [2024-08-13 06:18:51.665443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.889 [2024-08-13 06:18:51.665487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:49.889 BaseBdev1 00:23:50.149 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:50.149 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:50.149 BaseBdev2_malloc 00:23:50.149 06:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:50.409 [2024-08-13 06:18:52.035348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:50.409 [2024-08-13 06:18:52.035405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.409 
[2024-08-13 06:18:52.035423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:50.409 [2024-08-13 06:18:52.035433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.409 [2024-08-13 06:18:52.037355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.409 [2024-08-13 06:18:52.037397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:50.409 BaseBdev2 00:23:50.409 06:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:50.409 06:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:50.668 BaseBdev3_malloc 00:23:50.668 06:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:50.928 [2024-08-13 06:18:52.494499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:50.928 [2024-08-13 06:18:52.494596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.928 [2024-08-13 06:18:52.494620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:50.928 [2024-08-13 06:18:52.494631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.928 [2024-08-13 06:18:52.496617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.928 [2024-08-13 06:18:52.496659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:50.928 BaseBdev3 00:23:50.928 06:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:50.929 06:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:50.929 BaseBdev4_malloc 00:23:51.188 06:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:51.188 [2024-08-13 06:18:52.870330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:51.188 [2024-08-13 06:18:52.870380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.188 [2024-08-13 06:18:52.870396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:51.188 [2024-08-13 06:18:52.870409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.188 [2024-08-13 06:18:52.872365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.188 [2024-08-13 06:18:52.872405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:51.188 BaseBdev4 00:23:51.188 06:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:51.448 spare_malloc 00:23:51.448 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:51.708 spare_delay 00:23:51.708 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:51.708 [2024-08-13 06:18:53.437891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:51.708 [2024-08-13 06:18:53.437980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.708 [2024-08-13 06:18:53.437999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:51.708 [2024-08-13 06:18:53.438009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.708 [2024-08-13 06:18:53.439972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.708 [2024-08-13 06:18:53.440011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:51.708 spare 00:23:51.708 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:51.967 [2024-08-13 06:18:53.633641] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:51.967 [2024-08-13 06:18:53.635381] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:51.967 [2024-08-13 06:18:53.635473] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:51.967 [2024-08-13 06:18:53.635534] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:51.967 [2024-08-13 06:18:53.635732] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:23:51.967 [2024-08-13 06:18:53.635777] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:51.967 [2024-08-13 06:18:53.636025] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:51.967 [2024-08-13 06:18:53.636483] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:23:51.967 [2024-08-13 06:18:53.636532] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:23:51.967 [2024-08-13 06:18:53.636700] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.967 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.227 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.227 "name": "raid_bdev1", 00:23:52.227 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:23:52.227 "strip_size_kb": 64, 00:23:52.227 "state": "online", 00:23:52.227 "raid_level": "raid5f", 00:23:52.227 "superblock": true, 00:23:52.227 "num_base_bdevs": 4, 00:23:52.227 "num_base_bdevs_discovered": 4, 00:23:52.227 "num_base_bdevs_operational": 4, 00:23:52.227 "base_bdevs_list": [ 00:23:52.227 { 00:23:52.227 "name": "BaseBdev1", 00:23:52.227 "uuid": "c5546007-0e03-5c88-abed-4bf95cdcb79b", 00:23:52.227 "is_configured": true, 00:23:52.227 "data_offset": 2048, 00:23:52.227 "data_size": 63488 00:23:52.227 }, 00:23:52.227 { 00:23:52.227 "name": "BaseBdev2", 00:23:52.227 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:23:52.227 "is_configured": true, 00:23:52.227 "data_offset": 2048, 00:23:52.227 "data_size": 63488 00:23:52.227 }, 00:23:52.227 { 00:23:52.227 "name": "BaseBdev3", 00:23:52.227 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:23:52.227 "is_configured": true, 00:23:52.227 "data_offset": 2048, 00:23:52.227 "data_size": 63488 00:23:52.227 }, 00:23:52.227 { 00:23:52.227 "name": "BaseBdev4", 00:23:52.227 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:23:52.227 "is_configured": true, 00:23:52.227 "data_offset": 2048, 00:23:52.227 "data_size": 63488 00:23:52.227 } 00:23:52.227 ] 00:23:52.227 }' 00:23:52.227 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.227 06:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.797 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:52.797 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:53.056 [2024-08-13 06:18:54.597135] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=190464 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk-raid.sock 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.057 06:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:53.317 [2024-08-13 06:18:55.024206] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:23:53.317 /dev/nbd0 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:53.317 1+0 records in 00:23:53.317 1+0 records out 00:23:53.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549776 s, 7.5 MB/s 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@646 -- # echo 192 00:23:53.317 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:23:53.887 496+0 records in 00:23:53.887 496+0 records out 00:23:53.887 97517568 bytes (98 MB, 93 MiB) copied, 0.547691 s, 178 MB/s 00:23:53.887 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:53.887 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:53.887 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:53.887 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:53.887 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:53.887 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:53.887 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:54.146 [2024-08-13 06:18:55.860074] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:54.146 06:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:54.406 [2024-08-13 06:18:56.069319] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.406 06:18:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.406 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.666 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.666 "name": "raid_bdev1", 00:23:54.666 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:23:54.666 "strip_size_kb": 64, 00:23:54.666 "state": "online", 00:23:54.666 "raid_level": "raid5f", 00:23:54.666 "superblock": true, 00:23:54.666 "num_base_bdevs": 4, 00:23:54.666 "num_base_bdevs_discovered": 3, 00:23:54.666 "num_base_bdevs_operational": 3, 00:23:54.666 "base_bdevs_list": [ 00:23:54.666 { 00:23:54.666 "name": null, 00:23:54.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.666 "is_configured": false, 00:23:54.666 "data_offset": 2048, 00:23:54.666 "data_size": 63488 00:23:54.666 }, 00:23:54.666 { 00:23:54.666 "name": "BaseBdev2", 00:23:54.666 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:23:54.666 "is_configured": true, 00:23:54.666 "data_offset": 2048, 00:23:54.666 "data_size": 63488 00:23:54.666 }, 00:23:54.666 { 00:23:54.666 "name": "BaseBdev3", 00:23:54.666 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:23:54.666 "is_configured": true, 00:23:54.666 "data_offset": 2048, 00:23:54.666 "data_size": 63488 00:23:54.666 }, 00:23:54.666 { 00:23:54.666 "name": "BaseBdev4", 00:23:54.666 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:23:54.666 "is_configured": true, 00:23:54.666 "data_offset": 2048, 00:23:54.666 "data_size": 63488 00:23:54.666 } 00:23:54.666 ] 00:23:54.666 }' 00:23:54.666 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.666 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.235 06:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:55.235 [2024-08-13 06:18:56.987887] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:55.235 [2024-08-13 06:18:56.991359] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:23:55.235 [2024-08-13 06:18:56.993512] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:55.235 06:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 
-- # raid_bdev_info='{ 00:23:56.615 "name": "raid_bdev1", 00:23:56.615 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:23:56.615 "strip_size_kb": 64, 00:23:56.615 "state": "online", 00:23:56.615 "raid_level": "raid5f", 00:23:56.615 "superblock": true, 00:23:56.615 "num_base_bdevs": 4, 00:23:56.615 "num_base_bdevs_discovered": 4, 00:23:56.615 "num_base_bdevs_operational": 4, 00:23:56.615 "process": { 00:23:56.615 "type": "rebuild", 00:23:56.615 "target": "spare", 00:23:56.615 "progress": { 00:23:56.615 "blocks": 23040, 00:23:56.615 "percent": 12 00:23:56.615 } 00:23:56.615 }, 00:23:56.615 "base_bdevs_list": [ 00:23:56.615 { 00:23:56.615 "name": "spare", 00:23:56.615 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:23:56.615 "is_configured": true, 00:23:56.615 "data_offset": 2048, 00:23:56.615 "data_size": 63488 00:23:56.615 }, 00:23:56.615 { 00:23:56.615 "name": "BaseBdev2", 00:23:56.615 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:23:56.615 "is_configured": true, 00:23:56.615 "data_offset": 2048, 00:23:56.615 "data_size": 63488 00:23:56.615 }, 00:23:56.615 { 00:23:56.615 "name": "BaseBdev3", 00:23:56.615 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:23:56.615 "is_configured": true, 00:23:56.615 "data_offset": 2048, 00:23:56.615 "data_size": 63488 00:23:56.615 }, 00:23:56.615 { 00:23:56.615 "name": "BaseBdev4", 00:23:56.615 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:23:56.615 "is_configured": true, 00:23:56.615 "data_offset": 2048, 00:23:56.615 "data_size": 63488 00:23:56.615 } 00:23:56.615 ] 00:23:56.615 }' 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:56.615 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:56.875 [2024-08-13 06:18:58.475660] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:56.875 [2024-08-13 06:18:58.500121] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:56.875 [2024-08-13 06:18:58.500222] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.875 [2024-08-13 06:18:58.500255] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:56.875 [2024-08-13 06:18:58.500277] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:56.875 
06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.875 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.135 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.135 "name": "raid_bdev1", 00:23:57.135 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:23:57.135 "strip_size_kb": 64, 00:23:57.135 "state": "online", 00:23:57.135 "raid_level": "raid5f", 00:23:57.135 "superblock": true, 00:23:57.135 "num_base_bdevs": 4, 00:23:57.135 "num_base_bdevs_discovered": 3, 00:23:57.135 "num_base_bdevs_operational": 3, 00:23:57.135 "base_bdevs_list": [ 00:23:57.135 { 00:23:57.135 "name": null, 00:23:57.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.135 "is_configured": false, 00:23:57.135 "data_offset": 2048, 00:23:57.135 "data_size": 63488 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "name": "BaseBdev2", 00:23:57.135 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 2048, 00:23:57.135 "data_size": 63488 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "name": "BaseBdev3", 00:23:57.135 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 2048, 00:23:57.135 "data_size": 63488 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "name": "BaseBdev4", 00:23:57.135 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 2048, 00:23:57.135 "data_size": 63488 00:23:57.135 } 00:23:57.135 ] 00:23:57.135 }' 00:23:57.135 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.135 06:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.704 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:57.704 "name": "raid_bdev1", 00:23:57.704 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:23:57.704 "strip_size_kb": 64, 00:23:57.704 "state": "online", 00:23:57.704 "raid_level": "raid5f", 00:23:57.704 "superblock": 
true, 00:23:57.704 "num_base_bdevs": 4, 00:23:57.705 "num_base_bdevs_discovered": 3, 00:23:57.705 "num_base_bdevs_operational": 3, 00:23:57.705 "base_bdevs_list": [ 00:23:57.705 { 00:23:57.705 "name": null, 00:23:57.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.705 "is_configured": false, 00:23:57.705 "data_offset": 2048, 00:23:57.705 "data_size": 63488 00:23:57.705 }, 00:23:57.705 { 00:23:57.705 "name": "BaseBdev2", 00:23:57.705 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:23:57.705 "is_configured": true, 00:23:57.705 "data_offset": 2048, 00:23:57.705 "data_size": 63488 00:23:57.705 }, 00:23:57.705 { 00:23:57.705 "name": "BaseBdev3", 00:23:57.705 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:23:57.705 "is_configured": true, 00:23:57.705 "data_offset": 2048, 00:23:57.705 "data_size": 63488 00:23:57.705 }, 00:23:57.705 { 00:23:57.705 "name": "BaseBdev4", 00:23:57.705 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:23:57.705 "is_configured": true, 00:23:57.705 "data_offset": 2048, 00:23:57.705 "data_size": 63488 00:23:57.705 } 00:23:57.705 ] 00:23:57.705 }' 00:23:57.705 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:57.965 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:57.965 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:57.965 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:57.965 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:57.965 [2024-08-13 06:18:59.727368] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:57.965 [2024-08-13 06:18:59.730712] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:23:57.965 [2024-08-13 06:18:59.732783] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:57.965 06:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:59.344 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.344 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.344 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:59.344 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:59.344 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.344 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.344 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.345 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.345 "name": "raid_bdev1", 00:23:59.345 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:23:59.345 "strip_size_kb": 64, 00:23:59.345 "state": "online", 00:23:59.345 "raid_level": "raid5f", 00:23:59.345 "superblock": true, 00:23:59.345 "num_base_bdevs": 4, 00:23:59.345 "num_base_bdevs_discovered": 4, 00:23:59.345 
"num_base_bdevs_operational": 4, 00:23:59.345 "process": { 00:23:59.345 "type": "rebuild", 00:23:59.345 "target": "spare", 00:23:59.345 "progress": { 00:23:59.345 "blocks": 23040, 00:23:59.345 "percent": 12 00:23:59.345 } 00:23:59.345 }, 00:23:59.345 "base_bdevs_list": [ 00:23:59.345 { 00:23:59.345 "name": "spare", 00:23:59.345 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:23:59.345 "is_configured": true, 00:23:59.345 "data_offset": 2048, 00:23:59.345 "data_size": 63488 00:23:59.345 }, 00:23:59.345 { 00:23:59.345 "name": "BaseBdev2", 00:23:59.345 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:23:59.345 "is_configured": true, 00:23:59.345 "data_offset": 2048, 00:23:59.345 "data_size": 63488 00:23:59.345 }, 00:23:59.345 { 00:23:59.345 "name": "BaseBdev3", 00:23:59.345 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:23:59.345 "is_configured": true, 00:23:59.345 "data_offset": 2048, 00:23:59.345 "data_size": 63488 00:23:59.345 }, 00:23:59.345 { 00:23:59.345 "name": "BaseBdev4", 00:23:59.345 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:23:59.345 "is_configured": true, 00:23:59.345 "data_offset": 2048, 00:23:59.345 "data_size": 63488 00:23:59.345 } 00:23:59.345 ] 00:23:59.345 }' 00:23:59.345 06:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:23:59.345 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1093 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.345 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.604 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.604 "name": "raid_bdev1", 00:23:59.604 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:23:59.604 "strip_size_kb": 64, 
00:23:59.604 "state": "online", 00:23:59.604 "raid_level": "raid5f", 00:23:59.604 "superblock": true, 00:23:59.604 "num_base_bdevs": 4, 00:23:59.604 "num_base_bdevs_discovered": 4, 00:23:59.604 "num_base_bdevs_operational": 4, 00:23:59.604 "process": { 00:23:59.604 "type": "rebuild", 00:23:59.604 "target": "spare", 00:23:59.604 "progress": { 00:23:59.604 "blocks": 28800, 00:23:59.604 "percent": 15 00:23:59.604 } 00:23:59.604 }, 00:23:59.604 "base_bdevs_list": [ 00:23:59.604 { 00:23:59.604 "name": "spare", 00:23:59.604 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:23:59.604 "is_configured": true, 00:23:59.604 "data_offset": 2048, 00:23:59.604 "data_size": 63488 00:23:59.604 }, 00:23:59.604 { 00:23:59.604 "name": "BaseBdev2", 00:23:59.604 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:23:59.604 "is_configured": true, 00:23:59.604 "data_offset": 2048, 00:23:59.604 "data_size": 63488 00:23:59.604 }, 00:23:59.604 { 00:23:59.604 "name": "BaseBdev3", 00:23:59.604 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:23:59.604 "is_configured": true, 00:23:59.604 "data_offset": 2048, 00:23:59.604 "data_size": 63488 00:23:59.604 }, 00:23:59.604 { 00:23:59.604 "name": "BaseBdev4", 00:23:59.604 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:23:59.604 "is_configured": true, 00:23:59.604 "data_offset": 2048, 00:23:59.604 "data_size": 63488 00:23:59.604 } 00:23:59.604 ] 00:23:59.604 }' 00:23:59.604 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.604 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:59.604 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.604 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:59.604 06:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:00.982 "name": "raid_bdev1", 00:24:00.982 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:00.982 "strip_size_kb": 64, 00:24:00.982 "state": "online", 00:24:00.982 "raid_level": "raid5f", 00:24:00.982 "superblock": true, 00:24:00.982 "num_base_bdevs": 4, 00:24:00.982 "num_base_bdevs_discovered": 4, 00:24:00.982 "num_base_bdevs_operational": 4, 00:24:00.982 "process": { 00:24:00.982 "type": "rebuild", 00:24:00.982 "target": "spare", 00:24:00.982 "progress": { 00:24:00.982 "blocks": 
53760, 00:24:00.982 "percent": 28 00:24:00.982 } 00:24:00.982 }, 00:24:00.982 "base_bdevs_list": [ 00:24:00.982 { 00:24:00.982 "name": "spare", 00:24:00.982 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:00.982 "is_configured": true, 00:24:00.982 "data_offset": 2048, 00:24:00.982 "data_size": 63488 00:24:00.982 }, 00:24:00.982 { 00:24:00.982 "name": "BaseBdev2", 00:24:00.982 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:00.982 "is_configured": true, 00:24:00.982 "data_offset": 2048, 00:24:00.982 "data_size": 63488 00:24:00.982 }, 00:24:00.982 { 00:24:00.982 "name": "BaseBdev3", 00:24:00.982 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:00.982 "is_configured": true, 00:24:00.982 "data_offset": 2048, 00:24:00.982 "data_size": 63488 00:24:00.982 }, 00:24:00.982 { 00:24:00.982 "name": "BaseBdev4", 00:24:00.982 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:00.982 "is_configured": true, 00:24:00.982 "data_offset": 2048, 00:24:00.982 "data_size": 63488 00:24:00.982 } 00:24:00.982 ] 00:24:00.982 }' 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.982 06:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:01.917 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:01.917 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.917 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:01.917 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:01.918 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:01.918 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:01.918 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.918 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.175 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:02.175 "name": "raid_bdev1", 00:24:02.175 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:02.175 "strip_size_kb": 64, 00:24:02.175 "state": "online", 00:24:02.175 "raid_level": "raid5f", 00:24:02.175 "superblock": true, 00:24:02.175 "num_base_bdevs": 4, 00:24:02.175 "num_base_bdevs_discovered": 4, 00:24:02.175 "num_base_bdevs_operational": 4, 00:24:02.175 "process": { 00:24:02.175 "type": "rebuild", 00:24:02.175 "target": "spare", 00:24:02.175 "progress": { 00:24:02.175 "blocks": 78720, 00:24:02.175 "percent": 41 00:24:02.175 } 00:24:02.175 }, 00:24:02.175 "base_bdevs_list": [ 00:24:02.175 { 00:24:02.175 "name": "spare", 00:24:02.175 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:02.175 "is_configured": true, 00:24:02.175 "data_offset": 2048, 00:24:02.175 "data_size": 63488 00:24:02.175 }, 00:24:02.175 { 00:24:02.175 "name": 
"BaseBdev2", 00:24:02.175 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:02.175 "is_configured": true, 00:24:02.175 "data_offset": 2048, 00:24:02.175 "data_size": 63488 00:24:02.175 }, 00:24:02.175 { 00:24:02.175 "name": "BaseBdev3", 00:24:02.175 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:02.175 "is_configured": true, 00:24:02.175 "data_offset": 2048, 00:24:02.175 "data_size": 63488 00:24:02.175 }, 00:24:02.175 { 00:24:02.175 "name": "BaseBdev4", 00:24:02.175 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:02.175 "is_configured": true, 00:24:02.175 "data_offset": 2048, 00:24:02.175 "data_size": 63488 00:24:02.175 } 00:24:02.175 ] 00:24:02.175 }' 00:24:02.175 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:02.434 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.434 06:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:02.434 06:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.434 06:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.371 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.631 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.631 "name": "raid_bdev1", 00:24:03.631 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:03.631 "strip_size_kb": 64, 00:24:03.631 "state": "online", 00:24:03.631 "raid_level": "raid5f", 00:24:03.631 "superblock": true, 00:24:03.631 "num_base_bdevs": 4, 00:24:03.631 "num_base_bdevs_discovered": 4, 00:24:03.631 "num_base_bdevs_operational": 4, 00:24:03.631 "process": { 00:24:03.631 "type": "rebuild", 00:24:03.631 "target": "spare", 00:24:03.631 "progress": { 00:24:03.631 "blocks": 103680, 00:24:03.631 "percent": 54 00:24:03.631 } 00:24:03.631 }, 00:24:03.631 "base_bdevs_list": [ 00:24:03.631 { 00:24:03.631 "name": "spare", 00:24:03.631 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:03.631 "is_configured": true, 00:24:03.631 "data_offset": 2048, 00:24:03.631 "data_size": 63488 00:24:03.631 }, 00:24:03.631 { 00:24:03.631 "name": "BaseBdev2", 00:24:03.631 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:03.631 "is_configured": true, 00:24:03.631 "data_offset": 2048, 00:24:03.631 "data_size": 63488 00:24:03.631 }, 00:24:03.631 { 00:24:03.631 "name": "BaseBdev3", 00:24:03.631 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:03.631 "is_configured": true, 00:24:03.631 "data_offset": 
2048, 00:24:03.631 "data_size": 63488 00:24:03.631 }, 00:24:03.631 { 00:24:03.631 "name": "BaseBdev4", 00:24:03.631 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:03.631 "is_configured": true, 00:24:03.631 "data_offset": 2048, 00:24:03.631 "data_size": 63488 00:24:03.631 } 00:24:03.631 ] 00:24:03.631 }' 00:24:03.631 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:03.631 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.631 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:03.631 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.631 06:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.569 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.829 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:04.829 "name": "raid_bdev1", 00:24:04.829 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:04.829 "strip_size_kb": 64, 00:24:04.829 "state": "online", 00:24:04.829 "raid_level": "raid5f", 00:24:04.829 "superblock": true, 00:24:04.829 "num_base_bdevs": 4, 00:24:04.829 "num_base_bdevs_discovered": 4, 00:24:04.829 "num_base_bdevs_operational": 4, 00:24:04.829 "process": { 00:24:04.829 "type": "rebuild", 00:24:04.829 "target": "spare", 00:24:04.829 "progress": { 00:24:04.829 "blocks": 128640, 00:24:04.829 "percent": 67 00:24:04.829 } 00:24:04.829 }, 00:24:04.829 "base_bdevs_list": [ 00:24:04.829 { 00:24:04.829 "name": "spare", 00:24:04.829 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:04.829 "is_configured": true, 00:24:04.829 "data_offset": 2048, 00:24:04.829 "data_size": 63488 00:24:04.829 }, 00:24:04.829 { 00:24:04.829 "name": "BaseBdev2", 00:24:04.829 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:04.829 "is_configured": true, 00:24:04.829 "data_offset": 2048, 00:24:04.829 "data_size": 63488 00:24:04.829 }, 00:24:04.829 { 00:24:04.829 "name": "BaseBdev3", 00:24:04.829 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:04.829 "is_configured": true, 00:24:04.829 "data_offset": 2048, 00:24:04.829 "data_size": 63488 00:24:04.829 }, 00:24:04.829 { 00:24:04.829 "name": "BaseBdev4", 00:24:04.829 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:04.829 "is_configured": true, 00:24:04.829 "data_offset": 2048, 00:24:04.829 "data_size": 63488 00:24:04.829 } 00:24:04.829 ] 00:24:04.829 }' 00:24:04.829 06:19:06 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:04.829 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.829 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:05.088 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.088 06:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.028 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.298 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:06.298 "name": "raid_bdev1", 00:24:06.298 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:06.298 "strip_size_kb": 64, 00:24:06.298 "state": "online", 00:24:06.298 "raid_level": "raid5f", 00:24:06.298 "superblock": true, 00:24:06.298 "num_base_bdevs": 4, 00:24:06.298 "num_base_bdevs_discovered": 4, 00:24:06.298 "num_base_bdevs_operational": 4, 00:24:06.298 "process": { 00:24:06.298 "type": "rebuild", 00:24:06.298 "target": "spare", 00:24:06.298 "progress": { 00:24:06.298 "blocks": 153600, 00:24:06.298 "percent": 80 00:24:06.298 } 00:24:06.298 }, 00:24:06.298 "base_bdevs_list": [ 00:24:06.298 { 00:24:06.298 "name": "spare", 00:24:06.298 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:06.298 "is_configured": true, 00:24:06.298 "data_offset": 2048, 00:24:06.298 "data_size": 63488 00:24:06.298 }, 00:24:06.298 { 00:24:06.298 "name": "BaseBdev2", 00:24:06.298 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:06.298 "is_configured": true, 00:24:06.298 "data_offset": 2048, 00:24:06.298 "data_size": 63488 00:24:06.298 }, 00:24:06.298 { 00:24:06.298 "name": "BaseBdev3", 00:24:06.298 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:06.298 "is_configured": true, 00:24:06.298 "data_offset": 2048, 00:24:06.298 "data_size": 63488 00:24:06.298 }, 00:24:06.298 { 00:24:06.298 "name": "BaseBdev4", 00:24:06.298 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:06.298 "is_configured": true, 00:24:06.298 "data_offset": 2048, 00:24:06.298 "data_size": 63488 00:24:06.298 } 00:24:06.298 ] 00:24:06.298 }' 00:24:06.298 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:06.298 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.298 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:06.298 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.298 06:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.269 06:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.528 06:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:07.528 "name": "raid_bdev1", 00:24:07.528 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:07.528 "strip_size_kb": 64, 00:24:07.529 "state": "online", 00:24:07.529 "raid_level": "raid5f", 00:24:07.529 "superblock": true, 00:24:07.529 "num_base_bdevs": 4, 00:24:07.529 "num_base_bdevs_discovered": 4, 00:24:07.529 "num_base_bdevs_operational": 4, 00:24:07.529 "process": { 00:24:07.529 "type": "rebuild", 00:24:07.529 "target": "spare", 00:24:07.529 "progress": { 00:24:07.529 "blocks": 178560, 00:24:07.529 "percent": 93 00:24:07.529 } 00:24:07.529 }, 00:24:07.529 "base_bdevs_list": [ 00:24:07.529 { 00:24:07.529 "name": "spare", 00:24:07.529 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:07.529 "is_configured": true, 00:24:07.529 "data_offset": 2048, 00:24:07.529 "data_size": 63488 00:24:07.529 }, 00:24:07.529 { 00:24:07.529 "name": "BaseBdev2", 00:24:07.529 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:07.529 "is_configured": true, 00:24:07.529 "data_offset": 2048, 00:24:07.529 "data_size": 63488 00:24:07.529 }, 00:24:07.529 { 00:24:07.529 "name": "BaseBdev3", 00:24:07.529 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:07.529 "is_configured": true, 00:24:07.529 "data_offset": 2048, 00:24:07.529 "data_size": 63488 00:24:07.529 }, 00:24:07.529 { 00:24:07.529 "name": "BaseBdev4", 00:24:07.529 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:07.529 "is_configured": true, 00:24:07.529 "data_offset": 2048, 00:24:07.529 "data_size": 63488 00:24:07.529 } 00:24:07.529 ] 00:24:07.529 }' 00:24:07.529 06:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:07.529 06:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.529 06:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:07.529 06:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.529 06:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:08.097 [2024-08-13 06:19:09.772643] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:08.097 [2024-08-13 06:19:09.772704] bdev_raid.c:2548:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:08.097 [2024-08-13 06:19:09.772816] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.666 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:08.666 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.666 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:08.666 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:08.666 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:08.666 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:08.666 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.667 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.667 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:08.667 "name": "raid_bdev1", 00:24:08.667 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:08.667 "strip_size_kb": 64, 00:24:08.667 "state": "online", 00:24:08.667 "raid_level": "raid5f", 00:24:08.667 "superblock": true, 00:24:08.667 "num_base_bdevs": 4, 00:24:08.667 "num_base_bdevs_discovered": 4, 00:24:08.667 "num_base_bdevs_operational": 4, 00:24:08.667 "base_bdevs_list": [ 00:24:08.667 { 00:24:08.667 "name": "spare", 00:24:08.667 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:08.667 "is_configured": true, 00:24:08.667 "data_offset": 2048, 00:24:08.667 "data_size": 63488 00:24:08.667 }, 00:24:08.667 { 00:24:08.667 "name": "BaseBdev2", 00:24:08.667 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:08.667 "is_configured": true, 00:24:08.667 "data_offset": 2048, 00:24:08.667 "data_size": 63488 00:24:08.667 }, 00:24:08.667 { 00:24:08.667 "name": "BaseBdev3", 00:24:08.667 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:08.667 "is_configured": true, 00:24:08.667 "data_offset": 2048, 00:24:08.667 "data_size": 63488 00:24:08.667 }, 00:24:08.667 { 00:24:08.667 "name": "BaseBdev4", 00:24:08.667 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:08.667 "is_configured": true, 00:24:08.667 "data_offset": 2048, 00:24:08.667 "data_size": 63488 00:24:08.667 } 00:24:08.667 ] 00:24:08.667 }' 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.926 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.186 "name": "raid_bdev1", 00:24:09.186 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:09.186 "strip_size_kb": 64, 00:24:09.186 "state": "online", 00:24:09.186 "raid_level": "raid5f", 00:24:09.186 "superblock": true, 00:24:09.186 "num_base_bdevs": 4, 00:24:09.186 "num_base_bdevs_discovered": 4, 00:24:09.186 "num_base_bdevs_operational": 4, 00:24:09.186 "base_bdevs_list": [ 00:24:09.186 { 00:24:09.186 "name": "spare", 00:24:09.186 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:09.186 "is_configured": true, 00:24:09.186 "data_offset": 2048, 00:24:09.186 "data_size": 63488 00:24:09.186 }, 00:24:09.186 { 00:24:09.186 "name": "BaseBdev2", 00:24:09.186 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:09.186 "is_configured": true, 00:24:09.186 "data_offset": 2048, 00:24:09.186 "data_size": 63488 00:24:09.186 }, 00:24:09.186 { 00:24:09.186 "name": "BaseBdev3", 00:24:09.186 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:09.186 "is_configured": true, 00:24:09.186 "data_offset": 2048, 00:24:09.186 "data_size": 63488 00:24:09.186 }, 00:24:09.186 { 00:24:09.186 "name": "BaseBdev4", 00:24:09.186 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:09.186 "is_configured": true, 00:24:09.186 "data_offset": 2048, 00:24:09.186 "data_size": 63488 00:24:09.186 } 00:24:09.186 ] 00:24:09.186 }' 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.186 06:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.445 06:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.445 "name": "raid_bdev1", 00:24:09.446 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:09.446 "strip_size_kb": 64, 00:24:09.446 "state": "online", 00:24:09.446 "raid_level": "raid5f", 00:24:09.446 "superblock": true, 00:24:09.446 "num_base_bdevs": 4, 00:24:09.446 "num_base_bdevs_discovered": 4, 00:24:09.446 "num_base_bdevs_operational": 4, 00:24:09.446 "base_bdevs_list": [ 00:24:09.446 { 00:24:09.446 "name": "spare", 00:24:09.446 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:09.446 "is_configured": true, 00:24:09.446 "data_offset": 2048, 00:24:09.446 "data_size": 63488 00:24:09.446 }, 00:24:09.446 { 00:24:09.446 "name": "BaseBdev2", 00:24:09.446 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:09.446 "is_configured": true, 00:24:09.446 "data_offset": 2048, 00:24:09.446 "data_size": 63488 00:24:09.446 }, 00:24:09.446 { 00:24:09.446 "name": "BaseBdev3", 00:24:09.446 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:09.446 "is_configured": true, 00:24:09.446 "data_offset": 2048, 00:24:09.446 "data_size": 63488 00:24:09.446 }, 00:24:09.446 { 00:24:09.446 "name": "BaseBdev4", 00:24:09.446 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:09.446 "is_configured": true, 00:24:09.446 "data_offset": 2048, 00:24:09.446 "data_size": 63488 00:24:09.446 } 00:24:09.446 ] 00:24:09.446 }' 00:24:09.446 06:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.446 06:19:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.014 06:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:10.014 [2024-08-13 06:19:11.786376] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:10.014 [2024-08-13 06:19:11.786466] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:10.014 [2024-08-13 06:19:11.786613] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.014 [2024-08-13 06:19:11.786751] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:10.014 [2024-08-13 06:19:11.786797] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:24:10.273 06:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.273 06:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.273 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:10.532 /dev/nbd0 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.532 1+0 records in 00:24:10.532 1+0 records out 00:24:10.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059055 s, 6.9 MB/s 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.532 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:10.791 /dev/nbd1 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.791 1+0 records in 00:24:10.791 1+0 records out 00:24:10.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347853 s, 11.8 MB/s 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.791 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:11.050 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:11.051 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:11.051 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.051 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.051 
06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:11.051 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:11.051 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.051 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.051 06:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:24:11.310 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:11.570 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:11.829 [2024-08-13 06:19:13.416309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:11.829 [2024-08-13 06:19:13.416376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.829 [2024-08-13 06:19:13.416400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:11.829 [2024-08-13 06:19:13.416409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.829 [2024-08-13 06:19:13.418430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.829 [2024-08-13 06:19:13.418532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:11.829 [2024-08-13 06:19:13.418622] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:11.829 [2024-08-13 06:19:13.418659] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:11.829 [2024-08-13 06:19:13.418784] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:11.829 [2024-08-13 06:19:13.418875] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:11.829 [2024-08-13 06:19:13.418937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:11.829 spare 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:11.829 06:19:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.829 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.829 [2024-08-13 06:19:13.518822] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:24:11.829 [2024-08-13 06:19:13.518895] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:11.829 [2024-08-13 06:19:13.519182] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:24:11.829 [2024-08-13 06:19:13.519664] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:24:11.829 [2024-08-13 06:19:13.519719] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:24:11.830 [2024-08-13 06:19:13.519871] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.090 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.090 "name": "raid_bdev1", 00:24:12.090 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:12.090 "strip_size_kb": 64, 00:24:12.090 "state": "online", 00:24:12.090 "raid_level": "raid5f", 00:24:12.090 "superblock": true, 00:24:12.090 "num_base_bdevs": 4, 00:24:12.090 "num_base_bdevs_discovered": 4, 00:24:12.090 "num_base_bdevs_operational": 4, 00:24:12.090 "base_bdevs_list": [ 00:24:12.090 { 00:24:12.090 "name": "spare", 00:24:12.090 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:12.090 "is_configured": true, 00:24:12.090 "data_offset": 2048, 00:24:12.090 "data_size": 63488 00:24:12.090 }, 00:24:12.090 { 00:24:12.090 "name": "BaseBdev2", 00:24:12.090 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:12.090 "is_configured": true, 00:24:12.090 "data_offset": 2048, 00:24:12.090 "data_size": 63488 00:24:12.090 }, 00:24:12.090 { 00:24:12.090 "name": "BaseBdev3", 00:24:12.090 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:12.090 "is_configured": true, 00:24:12.090 "data_offset": 2048, 00:24:12.090 "data_size": 63488 00:24:12.090 }, 00:24:12.090 { 00:24:12.090 "name": "BaseBdev4", 00:24:12.090 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:12.090 "is_configured": true, 00:24:12.090 "data_offset": 2048, 00:24:12.090 "data_size": 63488 00:24:12.090 } 00:24:12.090 ] 00:24:12.090 }' 00:24:12.090 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.090 06:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:12.658 "name": "raid_bdev1", 00:24:12.658 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:12.658 "strip_size_kb": 64, 00:24:12.658 "state": "online", 00:24:12.658 "raid_level": "raid5f", 00:24:12.658 "superblock": true, 00:24:12.658 "num_base_bdevs": 4, 00:24:12.658 "num_base_bdevs_discovered": 4, 00:24:12.658 "num_base_bdevs_operational": 4, 00:24:12.658 "base_bdevs_list": [ 00:24:12.658 { 00:24:12.658 "name": "spare", 00:24:12.658 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:12.658 "is_configured": true, 00:24:12.658 "data_offset": 2048, 00:24:12.658 "data_size": 63488 00:24:12.658 }, 00:24:12.658 { 00:24:12.658 "name": "BaseBdev2", 00:24:12.658 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:12.658 "is_configured": true, 00:24:12.658 "data_offset": 2048, 00:24:12.658 "data_size": 63488 00:24:12.658 }, 00:24:12.658 { 00:24:12.658 "name": "BaseBdev3", 00:24:12.658 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:12.658 "is_configured": true, 00:24:12.658 "data_offset": 2048, 00:24:12.658 "data_size": 63488 00:24:12.658 }, 00:24:12.658 { 00:24:12.658 "name": "BaseBdev4", 00:24:12.658 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:12.658 "is_configured": true, 00:24:12.658 "data_offset": 2048, 00:24:12.658 "data_size": 63488 00:24:12.658 } 00:24:12.658 ] 00:24:12.658 }' 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:12.658 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:12.916 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:12.916 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.917 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:13.176 [2024-08-13 06:19:14.887149] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 
-- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.176 06:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.435 06:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:13.435 "name": "raid_bdev1", 00:24:13.435 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:13.435 "strip_size_kb": 64, 00:24:13.435 "state": "online", 00:24:13.435 "raid_level": "raid5f", 00:24:13.435 "superblock": true, 00:24:13.435 "num_base_bdevs": 4, 00:24:13.435 "num_base_bdevs_discovered": 3, 00:24:13.435 "num_base_bdevs_operational": 3, 00:24:13.435 "base_bdevs_list": [ 00:24:13.435 { 00:24:13.435 "name": null, 00:24:13.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.435 "is_configured": false, 00:24:13.435 "data_offset": 2048, 00:24:13.435 "data_size": 63488 00:24:13.435 }, 00:24:13.435 { 00:24:13.435 "name": "BaseBdev2", 00:24:13.435 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:13.435 "is_configured": true, 00:24:13.435 "data_offset": 2048, 00:24:13.435 "data_size": 63488 00:24:13.435 }, 00:24:13.435 { 00:24:13.435 "name": "BaseBdev3", 00:24:13.435 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:13.435 "is_configured": true, 00:24:13.435 "data_offset": 2048, 00:24:13.435 "data_size": 63488 00:24:13.435 }, 00:24:13.435 { 00:24:13.435 "name": "BaseBdev4", 00:24:13.435 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:13.435 "is_configured": true, 00:24:13.435 "data_offset": 2048, 00:24:13.435 "data_size": 63488 00:24:13.435 } 00:24:13.435 ] 00:24:13.435 }' 00:24:13.435 06:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:13.435 06:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.003 06:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:14.262 [2024-08-13 06:19:15.813600] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.262 [2024-08-13 06:19:15.813840] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:14.262 [2024-08-13 06:19:15.813915] 
bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:14.262 [2024-08-13 06:19:15.813967] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.263 [2024-08-13 06:19:15.817192] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:24:14.263 [2024-08-13 06:19:15.819317] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:14.263 06:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:24:15.200 06:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.200 06:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:15.200 06:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:15.200 06:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:15.200 06:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:15.200 06:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.200 06:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.459 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.459 "name": "raid_bdev1", 00:24:15.459 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:15.459 "strip_size_kb": 64, 00:24:15.459 "state": "online", 00:24:15.459 "raid_level": "raid5f", 00:24:15.459 "superblock": true, 00:24:15.459 "num_base_bdevs": 4, 00:24:15.459 "num_base_bdevs_discovered": 4, 00:24:15.459 "num_base_bdevs_operational": 4, 00:24:15.459 "process": { 00:24:15.459 "type": "rebuild", 00:24:15.459 "target": "spare", 00:24:15.459 "progress": { 00:24:15.459 "blocks": 23040, 00:24:15.459 "percent": 12 00:24:15.459 } 00:24:15.459 }, 00:24:15.459 "base_bdevs_list": [ 00:24:15.459 { 00:24:15.459 "name": "spare", 00:24:15.459 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:15.459 "is_configured": true, 00:24:15.459 "data_offset": 2048, 00:24:15.459 "data_size": 63488 00:24:15.459 }, 00:24:15.459 { 00:24:15.459 "name": "BaseBdev2", 00:24:15.459 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:15.459 "is_configured": true, 00:24:15.459 "data_offset": 2048, 00:24:15.459 "data_size": 63488 00:24:15.459 }, 00:24:15.459 { 00:24:15.459 "name": "BaseBdev3", 00:24:15.459 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:15.459 "is_configured": true, 00:24:15.459 "data_offset": 2048, 00:24:15.459 "data_size": 63488 00:24:15.459 }, 00:24:15.459 { 00:24:15.459 "name": "BaseBdev4", 00:24:15.459 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:15.459 "is_configured": true, 00:24:15.459 "data_offset": 2048, 00:24:15.459 "data_size": 63488 00:24:15.459 } 00:24:15.459 ] 00:24:15.459 }' 00:24:15.459 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:15.459 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.459 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:15.459 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e 
]] 00:24:15.459 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:15.718 [2024-08-13 06:19:17.345511] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.718 [2024-08-13 06:19:17.425919] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:15.718 [2024-08-13 06:19:17.425980] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.718 [2024-08-13 06:19:17.425996] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.718 [2024-08-13 06:19:17.426015] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.718 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.978 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.978 "name": "raid_bdev1", 00:24:15.978 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:15.978 "strip_size_kb": 64, 00:24:15.978 "state": "online", 00:24:15.978 "raid_level": "raid5f", 00:24:15.978 "superblock": true, 00:24:15.978 "num_base_bdevs": 4, 00:24:15.978 "num_base_bdevs_discovered": 3, 00:24:15.978 "num_base_bdevs_operational": 3, 00:24:15.978 "base_bdevs_list": [ 00:24:15.978 { 00:24:15.978 "name": null, 00:24:15.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.978 "is_configured": false, 00:24:15.978 "data_offset": 2048, 00:24:15.978 "data_size": 63488 00:24:15.978 }, 00:24:15.978 { 00:24:15.978 "name": "BaseBdev2", 00:24:15.978 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:15.978 "is_configured": true, 00:24:15.978 "data_offset": 2048, 00:24:15.978 "data_size": 63488 00:24:15.978 }, 00:24:15.978 { 00:24:15.978 "name": "BaseBdev3", 00:24:15.978 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:15.978 "is_configured": true, 00:24:15.978 "data_offset": 2048, 00:24:15.978 "data_size": 63488 00:24:15.978 }, 00:24:15.978 { 00:24:15.978 "name": "BaseBdev4", 00:24:15.978 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:15.978 
"is_configured": true, 00:24:15.978 "data_offset": 2048, 00:24:15.978 "data_size": 63488 00:24:15.978 } 00:24:15.978 ] 00:24:15.978 }' 00:24:15.978 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.978 06:19:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.546 06:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:16.805 [2024-08-13 06:19:18.373346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:16.805 [2024-08-13 06:19:18.373478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:16.805 [2024-08-13 06:19:18.373504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:16.805 [2024-08-13 06:19:18.373515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:16.805 [2024-08-13 06:19:18.373935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:16.805 [2024-08-13 06:19:18.373956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:16.805 [2024-08-13 06:19:18.374040] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:16.805 [2024-08-13 06:19:18.374076] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:16.805 [2024-08-13 06:19:18.374088] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:16.805 [2024-08-13 06:19:18.374113] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:16.805 [2024-08-13 06:19:18.377357] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:24:16.805 spare 00:24:16.805 [2024-08-13 06:19:18.379430] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:16.805 06:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:24:17.741 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.741 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.741 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:17.741 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:17.741 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.742 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.742 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.001 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:18.001 "name": "raid_bdev1", 00:24:18.001 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:18.001 "strip_size_kb": 64, 00:24:18.001 "state": "online", 00:24:18.001 "raid_level": "raid5f", 00:24:18.001 "superblock": true, 00:24:18.001 "num_base_bdevs": 4, 00:24:18.001 "num_base_bdevs_discovered": 4, 00:24:18.001 "num_base_bdevs_operational": 4, 
00:24:18.001 "process": { 00:24:18.001 "type": "rebuild", 00:24:18.001 "target": "spare", 00:24:18.001 "progress": { 00:24:18.001 "blocks": 23040, 00:24:18.001 "percent": 12 00:24:18.001 } 00:24:18.001 }, 00:24:18.001 "base_bdevs_list": [ 00:24:18.001 { 00:24:18.001 "name": "spare", 00:24:18.001 "uuid": "e659ea96-8877-513c-ab83-f835086113b5", 00:24:18.001 "is_configured": true, 00:24:18.001 "data_offset": 2048, 00:24:18.001 "data_size": 63488 00:24:18.001 }, 00:24:18.001 { 00:24:18.001 "name": "BaseBdev2", 00:24:18.001 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:18.001 "is_configured": true, 00:24:18.001 "data_offset": 2048, 00:24:18.001 "data_size": 63488 00:24:18.001 }, 00:24:18.001 { 00:24:18.001 "name": "BaseBdev3", 00:24:18.001 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:18.001 "is_configured": true, 00:24:18.001 "data_offset": 2048, 00:24:18.001 "data_size": 63488 00:24:18.001 }, 00:24:18.001 { 00:24:18.001 "name": "BaseBdev4", 00:24:18.001 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:18.001 "is_configured": true, 00:24:18.001 "data_offset": 2048, 00:24:18.001 "data_size": 63488 00:24:18.001 } 00:24:18.001 ] 00:24:18.001 }' 00:24:18.001 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:18.001 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.001 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:18.001 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.001 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:18.261 [2024-08-13 06:19:19.879515] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.261 [2024-08-13 06:19:19.885607] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:18.261 [2024-08-13 06:19:19.885705] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.261 [2024-08-13 06:19:19.885742] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.261 [2024-08-13 06:19:19.885762] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.261 06:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.521 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:18.521 "name": "raid_bdev1", 00:24:18.521 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:18.521 "strip_size_kb": 64, 00:24:18.521 "state": "online", 00:24:18.521 "raid_level": "raid5f", 00:24:18.521 "superblock": true, 00:24:18.521 "num_base_bdevs": 4, 00:24:18.521 "num_base_bdevs_discovered": 3, 00:24:18.521 "num_base_bdevs_operational": 3, 00:24:18.521 "base_bdevs_list": [ 00:24:18.521 { 00:24:18.521 "name": null, 00:24:18.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.521 "is_configured": false, 00:24:18.521 "data_offset": 2048, 00:24:18.521 "data_size": 63488 00:24:18.521 }, 00:24:18.521 { 00:24:18.521 "name": "BaseBdev2", 00:24:18.521 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:18.521 "is_configured": true, 00:24:18.521 "data_offset": 2048, 00:24:18.521 "data_size": 63488 00:24:18.521 }, 00:24:18.521 { 00:24:18.521 "name": "BaseBdev3", 00:24:18.521 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:18.521 "is_configured": true, 00:24:18.521 "data_offset": 2048, 00:24:18.521 "data_size": 63488 00:24:18.521 }, 00:24:18.521 { 00:24:18.521 "name": "BaseBdev4", 00:24:18.521 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:18.521 "is_configured": true, 00:24:18.521 "data_offset": 2048, 00:24:18.521 "data_size": 63488 00:24:18.521 } 00:24:18.521 ] 00:24:18.521 }' 00:24:18.521 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:18.521 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:19.090 "name": "raid_bdev1", 00:24:19.090 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:19.090 "strip_size_kb": 64, 00:24:19.090 "state": "online", 00:24:19.090 "raid_level": "raid5f", 00:24:19.090 "superblock": true, 00:24:19.090 "num_base_bdevs": 4, 00:24:19.090 "num_base_bdevs_discovered": 3, 00:24:19.090 "num_base_bdevs_operational": 3, 00:24:19.090 "base_bdevs_list": [ 00:24:19.090 { 00:24:19.090 "name": null, 00:24:19.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.090 "is_configured": false, 00:24:19.090 "data_offset": 2048, 00:24:19.090 "data_size": 63488 
00:24:19.090 }, 00:24:19.090 { 00:24:19.090 "name": "BaseBdev2", 00:24:19.090 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:19.090 "is_configured": true, 00:24:19.090 "data_offset": 2048, 00:24:19.090 "data_size": 63488 00:24:19.090 }, 00:24:19.090 { 00:24:19.090 "name": "BaseBdev3", 00:24:19.090 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:19.090 "is_configured": true, 00:24:19.090 "data_offset": 2048, 00:24:19.090 "data_size": 63488 00:24:19.090 }, 00:24:19.090 { 00:24:19.090 "name": "BaseBdev4", 00:24:19.090 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:19.090 "is_configured": true, 00:24:19.090 "data_offset": 2048, 00:24:19.090 "data_size": 63488 00:24:19.090 } 00:24:19.090 ] 00:24:19.090 }' 00:24:19.090 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:19.349 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:19.349 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:19.349 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:19.349 06:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:19.608 06:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:19.608 [2024-08-13 06:19:21.316197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:19.608 [2024-08-13 06:19:21.316252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.608 [2024-08-13 06:19:21.316272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:19.608 [2024-08-13 06:19:21.316280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.608 [2024-08-13 06:19:21.316674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.608 [2024-08-13 06:19:21.316690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:19.608 [2024-08-13 06:19:21.316758] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:19.608 [2024-08-13 06:19:21.316770] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:19.608 [2024-08-13 06:19:21.316789] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:19.608 BaseBdev1 00:24:19.608 06:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=3 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.547 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.806 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.806 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.806 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.806 "name": "raid_bdev1", 00:24:20.806 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:20.806 "strip_size_kb": 64, 00:24:20.806 "state": "online", 00:24:20.806 "raid_level": "raid5f", 00:24:20.806 "superblock": true, 00:24:20.806 "num_base_bdevs": 4, 00:24:20.806 "num_base_bdevs_discovered": 3, 00:24:20.806 "num_base_bdevs_operational": 3, 00:24:20.806 "base_bdevs_list": [ 00:24:20.806 { 00:24:20.806 "name": null, 00:24:20.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.806 "is_configured": false, 00:24:20.806 "data_offset": 2048, 00:24:20.806 "data_size": 63488 00:24:20.806 }, 00:24:20.806 { 00:24:20.806 "name": "BaseBdev2", 00:24:20.806 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:20.806 "is_configured": true, 00:24:20.806 "data_offset": 2048, 00:24:20.806 "data_size": 63488 00:24:20.806 }, 00:24:20.806 { 00:24:20.806 "name": "BaseBdev3", 00:24:20.806 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:20.806 "is_configured": true, 00:24:20.806 "data_offset": 2048, 00:24:20.806 "data_size": 63488 00:24:20.806 }, 00:24:20.806 { 00:24:20.806 "name": "BaseBdev4", 00:24:20.806 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:20.806 "is_configured": true, 00:24:20.806 "data_offset": 2048, 00:24:20.806 "data_size": 63488 00:24:20.806 } 00:24:20.806 ] 00:24:20.806 }' 00:24:20.806 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.806 06:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.375 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:21.375 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:21.375 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:21.375 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:21.375 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:21.375 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.375 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:21.634 "name": "raid_bdev1", 00:24:21.634 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:21.634 "strip_size_kb": 64, 00:24:21.634 "state": "online", 00:24:21.634 
"raid_level": "raid5f", 00:24:21.634 "superblock": true, 00:24:21.634 "num_base_bdevs": 4, 00:24:21.634 "num_base_bdevs_discovered": 3, 00:24:21.634 "num_base_bdevs_operational": 3, 00:24:21.634 "base_bdevs_list": [ 00:24:21.634 { 00:24:21.634 "name": null, 00:24:21.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.634 "is_configured": false, 00:24:21.634 "data_offset": 2048, 00:24:21.634 "data_size": 63488 00:24:21.634 }, 00:24:21.634 { 00:24:21.634 "name": "BaseBdev2", 00:24:21.634 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:21.634 "is_configured": true, 00:24:21.634 "data_offset": 2048, 00:24:21.634 "data_size": 63488 00:24:21.634 }, 00:24:21.634 { 00:24:21.634 "name": "BaseBdev3", 00:24:21.634 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:21.634 "is_configured": true, 00:24:21.634 "data_offset": 2048, 00:24:21.634 "data_size": 63488 00:24:21.634 }, 00:24:21.634 { 00:24:21.634 "name": "BaseBdev4", 00:24:21.634 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:21.634 "is_configured": true, 00:24:21.634 "data_offset": 2048, 00:24:21.634 "data_size": 63488 00:24:21.634 } 00:24:21.634 ] 00:24:21.634 }' 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@646 -- # local es=0 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:21.634 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:21.893 [2024-08-13 06:19:23.500574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:24:21.893 [2024-08-13 06:19:23.500729] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:21.893 [2024-08-13 06:19:23.500740] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:21.893 request: 00:24:21.893 { 00:24:21.893 "base_bdev": "BaseBdev1", 00:24:21.893 "raid_bdev": "raid_bdev1", 00:24:21.893 "method": "bdev_raid_add_base_bdev", 00:24:21.893 "req_id": 1 00:24:21.893 } 00:24:21.893 Got JSON-RPC error response 00:24:21.893 response: 00:24:21.893 { 00:24:21.893 "code": -22, 00:24:21.893 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:21.893 } 00:24:21.893 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # es=1 00:24:21.893 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:21.893 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:21.893 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:21.893 06:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.830 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.088 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.088 "name": "raid_bdev1", 00:24:23.089 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:23.089 "strip_size_kb": 64, 00:24:23.089 "state": "online", 00:24:23.089 "raid_level": "raid5f", 00:24:23.089 "superblock": true, 00:24:23.089 "num_base_bdevs": 4, 00:24:23.089 "num_base_bdevs_discovered": 3, 00:24:23.089 "num_base_bdevs_operational": 3, 00:24:23.089 "base_bdevs_list": [ 00:24:23.089 { 00:24:23.089 "name": null, 00:24:23.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.089 "is_configured": false, 00:24:23.089 "data_offset": 2048, 00:24:23.089 "data_size": 63488 00:24:23.089 }, 00:24:23.089 { 00:24:23.089 "name": "BaseBdev2", 00:24:23.089 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:23.089 "is_configured": true, 
00:24:23.089 "data_offset": 2048, 00:24:23.089 "data_size": 63488 00:24:23.089 }, 00:24:23.089 { 00:24:23.089 "name": "BaseBdev3", 00:24:23.089 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:23.089 "is_configured": true, 00:24:23.089 "data_offset": 2048, 00:24:23.089 "data_size": 63488 00:24:23.089 }, 00:24:23.089 { 00:24:23.089 "name": "BaseBdev4", 00:24:23.089 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:23.089 "is_configured": true, 00:24:23.089 "data_offset": 2048, 00:24:23.089 "data_size": 63488 00:24:23.089 } 00:24:23.089 ] 00:24:23.089 }' 00:24:23.089 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.089 06:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.656 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:23.656 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:23.656 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:23.656 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:23.656 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:23.656 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.656 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:23.915 "name": "raid_bdev1", 00:24:23.915 "uuid": "d36a061d-36d4-4eaf-8cc0-dd76581f2663", 00:24:23.915 "strip_size_kb": 64, 00:24:23.915 "state": "online", 00:24:23.915 "raid_level": "raid5f", 00:24:23.915 "superblock": true, 00:24:23.915 "num_base_bdevs": 4, 00:24:23.915 "num_base_bdevs_discovered": 3, 00:24:23.915 "num_base_bdevs_operational": 3, 00:24:23.915 "base_bdevs_list": [ 00:24:23.915 { 00:24:23.915 "name": null, 00:24:23.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.915 "is_configured": false, 00:24:23.915 "data_offset": 2048, 00:24:23.915 "data_size": 63488 00:24:23.915 }, 00:24:23.915 { 00:24:23.915 "name": "BaseBdev2", 00:24:23.915 "uuid": "320c0b44-061b-5b0b-8db4-2b0e55c61185", 00:24:23.915 "is_configured": true, 00:24:23.915 "data_offset": 2048, 00:24:23.915 "data_size": 63488 00:24:23.915 }, 00:24:23.915 { 00:24:23.915 "name": "BaseBdev3", 00:24:23.915 "uuid": "4eaba8da-8fa9-57f5-a5bb-32f91a0c448d", 00:24:23.915 "is_configured": true, 00:24:23.915 "data_offset": 2048, 00:24:23.915 "data_size": 63488 00:24:23.915 }, 00:24:23.915 { 00:24:23.915 "name": "BaseBdev4", 00:24:23.915 "uuid": "2f07c16c-2ae8-5c0f-a41b-d4b9601351f6", 00:24:23.915 "is_configured": true, 00:24:23.915 "data_offset": 2048, 00:24:23.915 "data_size": 63488 00:24:23.915 } 00:24:23.915 ] 00:24:23.915 }' 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:23.915 06:19:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 104317 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 104317 ']' 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 104317 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104317 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:23.915 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:23.915 killing process with pid 104317 00:24:23.915 Received shutdown signal, test time was about 60.000000 seconds 00:24:23.915 00:24:23.915 Latency(us) 00:24:23.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.915 =================================================================================================================== 00:24:23.915 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.916 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104317' 00:24:23.916 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 104317 00:24:23.916 [2024-08-13 06:19:25.589414] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:23.916 [2024-08-13 06:19:25.589537] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.916 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 104317 00:24:23.916 [2024-08-13 06:19:25.589613] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:23.916 [2024-08-13 06:19:25.589623] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:24:23.916 [2024-08-13 06:19:25.639499] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:24.175 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:24:24.175 ************************************ 00:24:24.175 END TEST raid5f_rebuild_test_sb 00:24:24.175 ************************************ 00:24:24.175 00:24:24.175 real 0m35.498s 00:24:24.175 user 0m52.913s 00:24:24.175 sys 0m4.852s 00:24:24.175 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:24.175 06:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.175 06:19:25 bdev_raid -- bdev/bdev_raid.sh@974 -- # base_blocklen=4096 00:24:24.175 06:19:25 bdev_raid -- bdev/bdev_raid.sh@976 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:24:24.175 06:19:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:24.175 06:19:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:24.175 06:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:24.175 ************************************ 00:24:24.175 START TEST raid_state_function_test_sb_4k 00:24:24.175 ************************************ 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:24.175 Process raid pid: 105232 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=105232 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 105232' 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 105232 /var/tmp/spdk-raid.sock 00:24:24.175 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 105232 ']' 00:24:24.176 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:24.176 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # 
local max_retries=100 00:24:24.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:24.176 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:24.176 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:24.176 06:19:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.434 [2024-08-13 06:19:26.042737] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:24:24.434 [2024-08-13 06:19:26.042870] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.434 [2024-08-13 06:19:26.191112] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.693 [2024-08-13 06:19:26.237118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.693 [2024-08-13 06:19:26.279828] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.693 [2024-08-13 06:19:26.279863] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.259 06:19:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:25.259 06:19:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:24:25.260 06:19:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:25.260 [2024-08-13 06:19:27.047635] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.260 [2024-08-13 06:19:27.047692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.260 [2024-08-13 06:19:27.047704] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.260 [2024-08-13 06:19:27.047711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.518 06:19:27 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.518 "name": "Existed_Raid", 00:24:25.518 "uuid": "572f4682-fcbc-4c4d-ae81-3b556c427bda", 00:24:25.518 "strip_size_kb": 0, 00:24:25.518 "state": "configuring", 00:24:25.518 "raid_level": "raid1", 00:24:25.518 "superblock": true, 00:24:25.518 "num_base_bdevs": 2, 00:24:25.518 "num_base_bdevs_discovered": 0, 00:24:25.518 "num_base_bdevs_operational": 2, 00:24:25.518 "base_bdevs_list": [ 00:24:25.518 { 00:24:25.518 "name": "BaseBdev1", 00:24:25.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.518 "is_configured": false, 00:24:25.518 "data_offset": 0, 00:24:25.518 "data_size": 0 00:24:25.518 }, 00:24:25.518 { 00:24:25.518 "name": "BaseBdev2", 00:24:25.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.518 "is_configured": false, 00:24:25.518 "data_offset": 0, 00:24:25.518 "data_size": 0 00:24:25.518 } 00:24:25.518 ] 00:24:25.518 }' 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.518 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.085 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:26.344 [2024-08-13 06:19:27.966091] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:26.344 [2024-08-13 06:19:27.966198] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:24:26.344 06:19:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:26.602 [2024-08-13 06:19:28.157744] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:26.602 [2024-08-13 06:19:28.157823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:26.602 [2024-08-13 06:19:28.157847] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:26.602 [2024-08-13 06:19:28.157854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:24:26.602 [2024-08-13 06:19:28.346114] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:26.602 BaseBdev1 00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 
00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:26.602 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:26.860 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:27.119 [ 00:24:27.119 { 00:24:27.119 "name": "BaseBdev1", 00:24:27.119 "aliases": [ 00:24:27.119 "ea192505-8cbc-4c09-97be-4f36f6c6537d" 00:24:27.119 ], 00:24:27.119 "product_name": "Malloc disk", 00:24:27.119 "block_size": 4096, 00:24:27.119 "num_blocks": 8192, 00:24:27.119 "uuid": "ea192505-8cbc-4c09-97be-4f36f6c6537d", 00:24:27.119 "assigned_rate_limits": { 00:24:27.119 "rw_ios_per_sec": 0, 00:24:27.119 "rw_mbytes_per_sec": 0, 00:24:27.119 "r_mbytes_per_sec": 0, 00:24:27.119 "w_mbytes_per_sec": 0 00:24:27.119 }, 00:24:27.119 "claimed": true, 00:24:27.119 "claim_type": "exclusive_write", 00:24:27.119 "zoned": false, 00:24:27.119 "supported_io_types": { 00:24:27.119 "read": true, 00:24:27.119 "write": true, 00:24:27.119 "unmap": true, 00:24:27.119 "flush": true, 00:24:27.119 "reset": true, 00:24:27.119 "nvme_admin": false, 00:24:27.119 "nvme_io": false, 00:24:27.119 "nvme_io_md": false, 00:24:27.119 "write_zeroes": true, 00:24:27.119 "zcopy": true, 00:24:27.119 "get_zone_info": false, 00:24:27.119 "zone_management": false, 00:24:27.119 "zone_append": false, 00:24:27.119 "compare": false, 00:24:27.119 "compare_and_write": false, 00:24:27.119 "abort": true, 00:24:27.119 "seek_hole": false, 00:24:27.119 "seek_data": false, 00:24:27.119 "copy": true, 00:24:27.119 "nvme_iov_md": false 00:24:27.119 }, 00:24:27.119 "memory_domains": [ 00:24:27.119 { 00:24:27.119 "dma_device_id": "system", 00:24:27.119 "dma_device_type": 1 00:24:27.119 }, 00:24:27.119 { 00:24:27.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.120 "dma_device_type": 2 00:24:27.120 } 00:24:27.120 ], 00:24:27.120 "driver_specific": {} 00:24:27.120 } 00:24:27.120 ] 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.120 06:19:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.120 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.379 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.379 "name": "Existed_Raid", 00:24:27.379 "uuid": "f61cce0f-9606-459a-b276-dff80731bbcb", 00:24:27.379 "strip_size_kb": 0, 00:24:27.379 "state": "configuring", 00:24:27.379 "raid_level": "raid1", 00:24:27.379 "superblock": true, 00:24:27.379 "num_base_bdevs": 2, 00:24:27.379 "num_base_bdevs_discovered": 1, 00:24:27.379 "num_base_bdevs_operational": 2, 00:24:27.379 "base_bdevs_list": [ 00:24:27.379 { 00:24:27.379 "name": "BaseBdev1", 00:24:27.379 "uuid": "ea192505-8cbc-4c09-97be-4f36f6c6537d", 00:24:27.379 "is_configured": true, 00:24:27.379 "data_offset": 256, 00:24:27.379 "data_size": 7936 00:24:27.379 }, 00:24:27.379 { 00:24:27.379 "name": "BaseBdev2", 00:24:27.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.379 "is_configured": false, 00:24:27.379 "data_offset": 0, 00:24:27.379 "data_size": 0 00:24:27.379 } 00:24:27.379 ] 00:24:27.379 }' 00:24:27.379 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.379 06:19:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.947 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:27.947 [2024-08-13 06:19:29.703768] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.947 [2024-08-13 06:19:29.703831] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:24:27.947 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:24:28.218 [2024-08-13 06:19:29.891489] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:28.218 [2024-08-13 06:19:29.893181] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:28.218 [2024-08-13 06:19:29.893218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.218 06:19:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.484 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.484 "name": "Existed_Raid", 00:24:28.484 "uuid": "58e8942a-85b6-4e2f-beef-8f51626e1847", 00:24:28.484 "strip_size_kb": 0, 00:24:28.484 "state": "configuring", 00:24:28.484 "raid_level": "raid1", 00:24:28.484 "superblock": true, 00:24:28.484 "num_base_bdevs": 2, 00:24:28.484 "num_base_bdevs_discovered": 1, 00:24:28.484 "num_base_bdevs_operational": 2, 00:24:28.484 "base_bdevs_list": [ 00:24:28.484 { 00:24:28.484 "name": "BaseBdev1", 00:24:28.484 "uuid": "ea192505-8cbc-4c09-97be-4f36f6c6537d", 00:24:28.484 "is_configured": true, 00:24:28.484 "data_offset": 256, 00:24:28.485 "data_size": 7936 00:24:28.485 }, 00:24:28.485 { 00:24:28.485 "name": "BaseBdev2", 00:24:28.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.485 "is_configured": false, 00:24:28.485 "data_offset": 0, 00:24:28.485 "data_size": 0 00:24:28.485 } 00:24:28.485 ] 00:24:28.485 }' 00:24:28.485 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.485 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.090 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:24:29.090 [2024-08-13 06:19:30.861971] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:29.090 [2024-08-13 06:19:30.862590] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:24:29.090 [2024-08-13 06:19:30.862675] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:29.090 BaseBdev2 00:24:29.090 [2024-08-13 06:19:30.863663] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:24:29.090 [2024-08-13 06:19:30.864267] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:24:29.090 [2024-08-13 06:19:30.864312] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:24:29.090 [2024-08-13 06:19:30.864714] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.349 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:29.349 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:29.349 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:29.349 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:24:29.349 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:29.349 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:29.349 06:19:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:29.349 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:29.609 [ 00:24:29.609 { 00:24:29.609 "name": "BaseBdev2", 00:24:29.609 "aliases": [ 00:24:29.609 "e443a644-0daa-4e8d-8b9e-996ee2bd2630" 00:24:29.609 ], 00:24:29.609 "product_name": "Malloc disk", 00:24:29.609 "block_size": 4096, 00:24:29.609 "num_blocks": 8192, 00:24:29.609 "uuid": "e443a644-0daa-4e8d-8b9e-996ee2bd2630", 00:24:29.609 "assigned_rate_limits": { 00:24:29.609 "rw_ios_per_sec": 0, 00:24:29.609 "rw_mbytes_per_sec": 0, 00:24:29.609 "r_mbytes_per_sec": 0, 00:24:29.609 "w_mbytes_per_sec": 0 00:24:29.609 }, 00:24:29.609 "claimed": true, 00:24:29.609 "claim_type": "exclusive_write", 00:24:29.609 "zoned": false, 00:24:29.609 "supported_io_types": { 00:24:29.609 "read": true, 00:24:29.609 "write": true, 00:24:29.609 "unmap": true, 00:24:29.609 "flush": true, 00:24:29.609 "reset": true, 00:24:29.609 "nvme_admin": false, 00:24:29.609 "nvme_io": false, 00:24:29.609 "nvme_io_md": false, 00:24:29.609 "write_zeroes": true, 00:24:29.609 "zcopy": true, 00:24:29.609 "get_zone_info": false, 00:24:29.609 "zone_management": false, 00:24:29.609 "zone_append": false, 00:24:29.609 "compare": false, 00:24:29.609 "compare_and_write": false, 00:24:29.609 "abort": true, 00:24:29.609 "seek_hole": false, 00:24:29.609 "seek_data": false, 00:24:29.609 "copy": true, 00:24:29.609 "nvme_iov_md": false 00:24:29.609 }, 00:24:29.609 "memory_domains": [ 00:24:29.609 { 00:24:29.609 "dma_device_id": "system", 00:24:29.609 "dma_device_type": 1 00:24:29.609 }, 00:24:29.609 { 00:24:29.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.609 "dma_device_type": 2 00:24:29.609 } 00:24:29.609 ], 00:24:29.609 "driver_specific": {} 00:24:29.609 } 00:24:29.609 ] 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:29.609 
06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.609 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.868 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.868 "name": "Existed_Raid", 00:24:29.868 "uuid": "58e8942a-85b6-4e2f-beef-8f51626e1847", 00:24:29.868 "strip_size_kb": 0, 00:24:29.868 "state": "online", 00:24:29.868 "raid_level": "raid1", 00:24:29.868 "superblock": true, 00:24:29.868 "num_base_bdevs": 2, 00:24:29.868 "num_base_bdevs_discovered": 2, 00:24:29.868 "num_base_bdevs_operational": 2, 00:24:29.868 "base_bdevs_list": [ 00:24:29.868 { 00:24:29.868 "name": "BaseBdev1", 00:24:29.868 "uuid": "ea192505-8cbc-4c09-97be-4f36f6c6537d", 00:24:29.868 "is_configured": true, 00:24:29.868 "data_offset": 256, 00:24:29.868 "data_size": 7936 00:24:29.868 }, 00:24:29.868 { 00:24:29.868 "name": "BaseBdev2", 00:24:29.868 "uuid": "e443a644-0daa-4e8d-8b9e-996ee2bd2630", 00:24:29.868 "is_configured": true, 00:24:29.868 "data_offset": 256, 00:24:29.868 "data_size": 7936 00:24:29.868 } 00:24:29.868 ] 00:24:29.868 }' 00:24:29.868 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.868 06:19:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:30.437 [2024-08-13 06:19:32.199869] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:30.437 "name": "Existed_Raid", 00:24:30.437 "aliases": [ 00:24:30.437 "58e8942a-85b6-4e2f-beef-8f51626e1847" 00:24:30.437 ], 00:24:30.437 "product_name": "Raid Volume", 00:24:30.437 "block_size": 4096, 00:24:30.437 
"num_blocks": 7936, 00:24:30.437 "uuid": "58e8942a-85b6-4e2f-beef-8f51626e1847", 00:24:30.437 "assigned_rate_limits": { 00:24:30.437 "rw_ios_per_sec": 0, 00:24:30.437 "rw_mbytes_per_sec": 0, 00:24:30.437 "r_mbytes_per_sec": 0, 00:24:30.437 "w_mbytes_per_sec": 0 00:24:30.437 }, 00:24:30.437 "claimed": false, 00:24:30.437 "zoned": false, 00:24:30.437 "supported_io_types": { 00:24:30.437 "read": true, 00:24:30.437 "write": true, 00:24:30.437 "unmap": false, 00:24:30.437 "flush": false, 00:24:30.437 "reset": true, 00:24:30.437 "nvme_admin": false, 00:24:30.437 "nvme_io": false, 00:24:30.437 "nvme_io_md": false, 00:24:30.437 "write_zeroes": true, 00:24:30.437 "zcopy": false, 00:24:30.437 "get_zone_info": false, 00:24:30.437 "zone_management": false, 00:24:30.437 "zone_append": false, 00:24:30.437 "compare": false, 00:24:30.437 "compare_and_write": false, 00:24:30.437 "abort": false, 00:24:30.437 "seek_hole": false, 00:24:30.437 "seek_data": false, 00:24:30.437 "copy": false, 00:24:30.437 "nvme_iov_md": false 00:24:30.437 }, 00:24:30.437 "memory_domains": [ 00:24:30.437 { 00:24:30.437 "dma_device_id": "system", 00:24:30.437 "dma_device_type": 1 00:24:30.437 }, 00:24:30.437 { 00:24:30.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.437 "dma_device_type": 2 00:24:30.437 }, 00:24:30.437 { 00:24:30.437 "dma_device_id": "system", 00:24:30.437 "dma_device_type": 1 00:24:30.437 }, 00:24:30.437 { 00:24:30.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.437 "dma_device_type": 2 00:24:30.437 } 00:24:30.437 ], 00:24:30.437 "driver_specific": { 00:24:30.437 "raid": { 00:24:30.437 "uuid": "58e8942a-85b6-4e2f-beef-8f51626e1847", 00:24:30.437 "strip_size_kb": 0, 00:24:30.437 "state": "online", 00:24:30.437 "raid_level": "raid1", 00:24:30.437 "superblock": true, 00:24:30.437 "num_base_bdevs": 2, 00:24:30.437 "num_base_bdevs_discovered": 2, 00:24:30.437 "num_base_bdevs_operational": 2, 00:24:30.437 "base_bdevs_list": [ 00:24:30.437 { 00:24:30.437 "name": "BaseBdev1", 00:24:30.437 "uuid": "ea192505-8cbc-4c09-97be-4f36f6c6537d", 00:24:30.437 "is_configured": true, 00:24:30.437 "data_offset": 256, 00:24:30.437 "data_size": 7936 00:24:30.437 }, 00:24:30.437 { 00:24:30.437 "name": "BaseBdev2", 00:24:30.437 "uuid": "e443a644-0daa-4e8d-8b9e-996ee2bd2630", 00:24:30.437 "is_configured": true, 00:24:30.437 "data_offset": 256, 00:24:30.437 "data_size": 7936 00:24:30.437 } 00:24:30.437 ] 00:24:30.437 } 00:24:30.437 } 00:24:30.437 }' 00:24:30.437 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:30.697 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:30.697 BaseBdev2' 00:24:30.697 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.697 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:30.697 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.697 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.697 "name": "BaseBdev1", 00:24:30.697 "aliases": [ 00:24:30.697 "ea192505-8cbc-4c09-97be-4f36f6c6537d" 00:24:30.697 ], 00:24:30.697 "product_name": "Malloc disk", 00:24:30.697 "block_size": 4096, 00:24:30.697 "num_blocks": 8192, 
00:24:30.697 "uuid": "ea192505-8cbc-4c09-97be-4f36f6c6537d", 00:24:30.697 "assigned_rate_limits": { 00:24:30.697 "rw_ios_per_sec": 0, 00:24:30.697 "rw_mbytes_per_sec": 0, 00:24:30.697 "r_mbytes_per_sec": 0, 00:24:30.697 "w_mbytes_per_sec": 0 00:24:30.697 }, 00:24:30.697 "claimed": true, 00:24:30.697 "claim_type": "exclusive_write", 00:24:30.697 "zoned": false, 00:24:30.697 "supported_io_types": { 00:24:30.697 "read": true, 00:24:30.697 "write": true, 00:24:30.697 "unmap": true, 00:24:30.697 "flush": true, 00:24:30.697 "reset": true, 00:24:30.697 "nvme_admin": false, 00:24:30.697 "nvme_io": false, 00:24:30.697 "nvme_io_md": false, 00:24:30.697 "write_zeroes": true, 00:24:30.697 "zcopy": true, 00:24:30.697 "get_zone_info": false, 00:24:30.697 "zone_management": false, 00:24:30.697 "zone_append": false, 00:24:30.697 "compare": false, 00:24:30.697 "compare_and_write": false, 00:24:30.697 "abort": true, 00:24:30.697 "seek_hole": false, 00:24:30.697 "seek_data": false, 00:24:30.697 "copy": true, 00:24:30.697 "nvme_iov_md": false 00:24:30.697 }, 00:24:30.697 "memory_domains": [ 00:24:30.697 { 00:24:30.697 "dma_device_id": "system", 00:24:30.697 "dma_device_type": 1 00:24:30.697 }, 00:24:30.697 { 00:24:30.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.697 "dma_device_type": 2 00:24:30.697 } 00:24:30.697 ], 00:24:30.697 "driver_specific": {} 00:24:30.697 }' 00:24:30.697 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.957 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.216 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.216 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.216 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.216 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:31.216 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.216 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.216 "name": "BaseBdev2", 00:24:31.216 "aliases": [ 00:24:31.216 "e443a644-0daa-4e8d-8b9e-996ee2bd2630" 00:24:31.216 ], 00:24:31.216 "product_name": "Malloc disk", 00:24:31.216 "block_size": 4096, 00:24:31.216 "num_blocks": 8192, 00:24:31.216 "uuid": "e443a644-0daa-4e8d-8b9e-996ee2bd2630", 00:24:31.216 "assigned_rate_limits": { 
00:24:31.216 "rw_ios_per_sec": 0, 00:24:31.216 "rw_mbytes_per_sec": 0, 00:24:31.217 "r_mbytes_per_sec": 0, 00:24:31.217 "w_mbytes_per_sec": 0 00:24:31.217 }, 00:24:31.217 "claimed": true, 00:24:31.217 "claim_type": "exclusive_write", 00:24:31.217 "zoned": false, 00:24:31.217 "supported_io_types": { 00:24:31.217 "read": true, 00:24:31.217 "write": true, 00:24:31.217 "unmap": true, 00:24:31.217 "flush": true, 00:24:31.217 "reset": true, 00:24:31.217 "nvme_admin": false, 00:24:31.217 "nvme_io": false, 00:24:31.217 "nvme_io_md": false, 00:24:31.217 "write_zeroes": true, 00:24:31.217 "zcopy": true, 00:24:31.217 "get_zone_info": false, 00:24:31.217 "zone_management": false, 00:24:31.217 "zone_append": false, 00:24:31.217 "compare": false, 00:24:31.217 "compare_and_write": false, 00:24:31.217 "abort": true, 00:24:31.217 "seek_hole": false, 00:24:31.217 "seek_data": false, 00:24:31.217 "copy": true, 00:24:31.217 "nvme_iov_md": false 00:24:31.217 }, 00:24:31.217 "memory_domains": [ 00:24:31.217 { 00:24:31.217 "dma_device_id": "system", 00:24:31.217 "dma_device_type": 1 00:24:31.217 }, 00:24:31.217 { 00:24:31.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.217 "dma_device_type": 2 00:24:31.217 } 00:24:31.217 ], 00:24:31.217 "driver_specific": {} 00:24:31.217 }' 00:24:31.217 06:19:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.476 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.476 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:24:31.476 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.476 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.477 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.477 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.477 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.477 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.477 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.477 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:31.736 [2024-08-13 06:19:33.477678] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state 
Existed_Raid online raid1 0 1 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.736 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.737 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.737 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.737 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.737 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.996 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.996 "name": "Existed_Raid", 00:24:31.996 "uuid": "58e8942a-85b6-4e2f-beef-8f51626e1847", 00:24:31.996 "strip_size_kb": 0, 00:24:31.996 "state": "online", 00:24:31.996 "raid_level": "raid1", 00:24:31.996 "superblock": true, 00:24:31.996 "num_base_bdevs": 2, 00:24:31.996 "num_base_bdevs_discovered": 1, 00:24:31.996 "num_base_bdevs_operational": 1, 00:24:31.996 "base_bdevs_list": [ 00:24:31.996 { 00:24:31.996 "name": null, 00:24:31.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.996 "is_configured": false, 00:24:31.996 "data_offset": 256, 00:24:31.996 "data_size": 7936 00:24:31.996 }, 00:24:31.996 { 00:24:31.996 "name": "BaseBdev2", 00:24:31.996 "uuid": "e443a644-0daa-4e8d-8b9e-996ee2bd2630", 00:24:31.996 "is_configured": true, 00:24:31.996 "data_offset": 256, 00:24:31.996 "data_size": 7936 00:24:31.996 } 00:24:31.996 ] 00:24:31.996 }' 00:24:31.997 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.997 06:19:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.564 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:32.564 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:32.564 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.564 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:32.823 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:32.823 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:32.823 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:24:33.083 [2024-08-13 06:19:34.655003] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:33.083 [2024-08-13 06:19:34.655181] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.083 [2024-08-13 06:19:34.666197] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.083 [2024-08-13 06:19:34.666311] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.083 [2024-08-13 06:19:34.666347] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:24:33.083 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:33.083 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:33.083 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.083 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 105232 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 105232 ']' 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 105232 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105232 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105232' 00:24:33.343 killing process with pid 105232 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 105232 00:24:33.343 [2024-08-13 06:19:34.949056] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.343 06:19:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 105232 00:24:33.343 [2024-08-13 06:19:34.950062] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:33.603 06:19:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:24:33.603 ************************************ 00:24:33.603 END TEST raid_state_function_test_sb_4k 00:24:33.603 ************************************ 00:24:33.603 00:24:33.603 real 0m9.249s 00:24:33.603 user 0m16.406s 00:24:33.603 sys 0m1.617s 00:24:33.603 06:19:35 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:24:33.603 06:19:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.603 06:19:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:24:33.603 06:19:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:33.603 06:19:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:33.603 06:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:33.603 ************************************ 00:24:33.603 START TEST raid_superblock_test_4k 00:24:33.603 ************************************ 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=105566 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 105566 /var/tmp/spdk-raid.sock 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 105566 ']' 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:33.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
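The trace that follows brings up a dedicated bdev_svc instance on /var/tmp/spdk-raid.sock and then drives it with scripts/rpc.py. A condensed sketch of that setup flow, using only RPC calls and arguments that appear later in this log (the rpc wrapper function and the socket polling loop are illustrative stand-ins; the real test uses waitforlisten from autotest_common.sh):

  # Sketch, not the verbatim bdev_raid.sh code. Paths match the ones printed in the trace.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock
  rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

  # start a bare bdev application with raid debug logging, as the test does
  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
  svc_pid=$!
  while [ ! -S "$SOCK" ]; do sleep 0.2; done   # stand-in for waitforlisten

  # build the 2-leg raid1 with an on-disk superblock (-s), as in the trace below
  rpc bdev_malloc_create 32 4096 -b malloc1
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc bdev_malloc_create 32 4096 -b malloc2
  rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

  # inspect and tear down; the real test kills the process via killprocess
  rpc bdev_raid_get_bdevs all
  rpc bdev_raid_delete raid_bdev1
  kill "$svc_pid"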
00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:33.603 06:19:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.603 [2024-08-13 06:19:35.363594] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:24:33.604 [2024-08-13 06:19:35.363726] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105566 ] 00:24:33.864 [2024-08-13 06:19:35.511178] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.864 [2024-08-13 06:19:35.557610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.864 [2024-08-13 06:19:35.600412] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:33.864 [2024-08-13 06:19:35.600448] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:34.433 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:24:34.693 malloc1 00:24:34.693 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:34.952 [2024-08-13 06:19:36.576529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:34.952 [2024-08-13 06:19:36.576677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.952 [2024-08-13 06:19:36.576718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:24:34.952 [2024-08-13 06:19:36.576746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.952 [2024-08-13 06:19:36.578795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.952 [2024-08-13 06:19:36.578868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:34.952 pt1 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:34.952 06:19:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:34.952 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:24:35.212 malloc2 00:24:35.212 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:35.212 [2024-08-13 06:19:36.968506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:35.212 [2024-08-13 06:19:36.968663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.212 [2024-08-13 06:19:36.968698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:35.212 [2024-08-13 06:19:36.968725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.212 [2024-08-13 06:19:36.970807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.212 [2024-08-13 06:19:36.970881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:35.212 pt2 00:24:35.212 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:35.212 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:35.212 06:19:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:24:35.471 [2024-08-13 06:19:37.156199] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:35.471 [2024-08-13 06:19:37.157918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:35.471 [2024-08-13 06:19:37.158125] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:24:35.471 [2024-08-13 06:19:37.158170] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:35.471 [2024-08-13 06:19:37.158490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:24:35.471 [2024-08-13 06:19:37.158670] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:24:35.471 [2024-08-13 06:19:37.158720] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:24:35.471 [2024-08-13 06:19:37.158902] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:35.471 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:35.472 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.472 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.731 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:35.731 "name": "raid_bdev1", 00:24:35.731 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:35.731 "strip_size_kb": 0, 00:24:35.731 "state": "online", 00:24:35.731 "raid_level": "raid1", 00:24:35.731 "superblock": true, 00:24:35.731 "num_base_bdevs": 2, 00:24:35.731 "num_base_bdevs_discovered": 2, 00:24:35.731 "num_base_bdevs_operational": 2, 00:24:35.731 "base_bdevs_list": [ 00:24:35.731 { 00:24:35.731 "name": "pt1", 00:24:35.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:35.731 "is_configured": true, 00:24:35.731 "data_offset": 256, 00:24:35.731 "data_size": 7936 00:24:35.731 }, 00:24:35.731 { 00:24:35.731 "name": "pt2", 00:24:35.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:35.731 "is_configured": true, 00:24:35.731 "data_offset": 256, 00:24:35.731 "data_size": 7936 00:24:35.731 } 00:24:35.731 ] 00:24:35.731 }' 00:24:35.731 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:35.731 06:19:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:36.300 06:19:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:36.559 [2024-08-13 06:19:38.118798] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
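The verify helpers seen at bdev_raid.sh@116-128 and @200-208 work by dumping JSON over RPC and filtering it with jq, exactly as the surrounding trace lines show. A hedged sketch of that selection pattern (function names are illustrative and the real helpers also compare raid_level, strip_size and the discovered/operational base bdev counts; rpc is assumed to wrap scripts/rpc.py -s /var/tmp/spdk-raid.sock as above):

  # raid-level state check, mirroring the jq select on bdev_raid_get_bdevs output
  check_raid_state() {
      local name=$1 expected_state=$2
      local info state discovered
      info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      state=$(jq -r .state <<< "$info")
      discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
      [[ $state == "$expected_state" ]] || return 1
      echo "$name: state=$state discovered=$discovered"
  }

  # per-base-bdev property checks, mirroring the jq .block_size / .md_size probes
  check_base_bdev() {
      local name=$1 info
      info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size <<< "$info") == 4096 ]] || return 1
      [[ $(jq .md_size <<< "$info") == null ]] || return 1
  }

For a raid1 volume these checks are what confirm the redundancy behaviour exercised here: removing one leg leaves the array online with the operational count dropped to 1, which is the state the trace verifies after each base bdev delete.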
00:24:36.559 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:36.559 "name": "raid_bdev1", 00:24:36.559 "aliases": [ 00:24:36.559 "57af8c1e-2795-47e8-9445-bc43af56e65a" 00:24:36.559 ], 00:24:36.560 "product_name": "Raid Volume", 00:24:36.560 "block_size": 4096, 00:24:36.560 "num_blocks": 7936, 00:24:36.560 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:36.560 "assigned_rate_limits": { 00:24:36.560 "rw_ios_per_sec": 0, 00:24:36.560 "rw_mbytes_per_sec": 0, 00:24:36.560 "r_mbytes_per_sec": 0, 00:24:36.560 "w_mbytes_per_sec": 0 00:24:36.560 }, 00:24:36.560 "claimed": false, 00:24:36.560 "zoned": false, 00:24:36.560 "supported_io_types": { 00:24:36.560 "read": true, 00:24:36.560 "write": true, 00:24:36.560 "unmap": false, 00:24:36.560 "flush": false, 00:24:36.560 "reset": true, 00:24:36.560 "nvme_admin": false, 00:24:36.560 "nvme_io": false, 00:24:36.560 "nvme_io_md": false, 00:24:36.560 "write_zeroes": true, 00:24:36.560 "zcopy": false, 00:24:36.560 "get_zone_info": false, 00:24:36.560 "zone_management": false, 00:24:36.560 "zone_append": false, 00:24:36.560 "compare": false, 00:24:36.560 "compare_and_write": false, 00:24:36.560 "abort": false, 00:24:36.560 "seek_hole": false, 00:24:36.560 "seek_data": false, 00:24:36.560 "copy": false, 00:24:36.560 "nvme_iov_md": false 00:24:36.560 }, 00:24:36.560 "memory_domains": [ 00:24:36.560 { 00:24:36.560 "dma_device_id": "system", 00:24:36.560 "dma_device_type": 1 00:24:36.560 }, 00:24:36.560 { 00:24:36.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.560 "dma_device_type": 2 00:24:36.560 }, 00:24:36.560 { 00:24:36.560 "dma_device_id": "system", 00:24:36.560 "dma_device_type": 1 00:24:36.560 }, 00:24:36.560 { 00:24:36.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.560 "dma_device_type": 2 00:24:36.560 } 00:24:36.560 ], 00:24:36.560 "driver_specific": { 00:24:36.560 "raid": { 00:24:36.560 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:36.560 "strip_size_kb": 0, 00:24:36.560 "state": "online", 00:24:36.560 "raid_level": "raid1", 00:24:36.560 "superblock": true, 00:24:36.560 "num_base_bdevs": 2, 00:24:36.560 "num_base_bdevs_discovered": 2, 00:24:36.560 "num_base_bdevs_operational": 2, 00:24:36.560 "base_bdevs_list": [ 00:24:36.560 { 00:24:36.560 "name": "pt1", 00:24:36.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:36.560 "is_configured": true, 00:24:36.560 "data_offset": 256, 00:24:36.560 "data_size": 7936 00:24:36.560 }, 00:24:36.560 { 00:24:36.560 "name": "pt2", 00:24:36.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:36.560 "is_configured": true, 00:24:36.560 "data_offset": 256, 00:24:36.560 "data_size": 7936 00:24:36.560 } 00:24:36.560 ] 00:24:36.560 } 00:24:36.560 } 00:24:36.560 }' 00:24:36.560 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:36.560 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:36.560 pt2' 00:24:36.560 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:36.560 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:36.560 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:36.819 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:24:36.819 "name": "pt1", 00:24:36.819 "aliases": [ 00:24:36.819 "00000000-0000-0000-0000-000000000001" 00:24:36.819 ], 00:24:36.819 "product_name": "passthru", 00:24:36.819 "block_size": 4096, 00:24:36.819 "num_blocks": 8192, 00:24:36.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:36.819 "assigned_rate_limits": { 00:24:36.819 "rw_ios_per_sec": 0, 00:24:36.819 "rw_mbytes_per_sec": 0, 00:24:36.819 "r_mbytes_per_sec": 0, 00:24:36.819 "w_mbytes_per_sec": 0 00:24:36.819 }, 00:24:36.819 "claimed": true, 00:24:36.819 "claim_type": "exclusive_write", 00:24:36.819 "zoned": false, 00:24:36.819 "supported_io_types": { 00:24:36.819 "read": true, 00:24:36.819 "write": true, 00:24:36.819 "unmap": true, 00:24:36.819 "flush": true, 00:24:36.819 "reset": true, 00:24:36.819 "nvme_admin": false, 00:24:36.819 "nvme_io": false, 00:24:36.819 "nvme_io_md": false, 00:24:36.819 "write_zeroes": true, 00:24:36.819 "zcopy": true, 00:24:36.819 "get_zone_info": false, 00:24:36.819 "zone_management": false, 00:24:36.819 "zone_append": false, 00:24:36.819 "compare": false, 00:24:36.819 "compare_and_write": false, 00:24:36.819 "abort": true, 00:24:36.819 "seek_hole": false, 00:24:36.819 "seek_data": false, 00:24:36.819 "copy": true, 00:24:36.819 "nvme_iov_md": false 00:24:36.819 }, 00:24:36.820 "memory_domains": [ 00:24:36.820 { 00:24:36.820 "dma_device_id": "system", 00:24:36.820 "dma_device_type": 1 00:24:36.820 }, 00:24:36.820 { 00:24:36.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.820 "dma_device_type": 2 00:24:36.820 } 00:24:36.820 ], 00:24:36.820 "driver_specific": { 00:24:36.820 "passthru": { 00:24:36.820 "name": "pt1", 00:24:36.820 "base_bdev_name": "malloc1" 00:24:36.820 } 00:24:36.820 } 00:24:36.820 }' 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.820 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.079 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:37.079 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.079 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.079 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:37.079 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:37.079 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:37.079 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:37.339 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:37.339 "name": "pt2", 00:24:37.339 "aliases": [ 00:24:37.339 
"00000000-0000-0000-0000-000000000002" 00:24:37.339 ], 00:24:37.339 "product_name": "passthru", 00:24:37.339 "block_size": 4096, 00:24:37.339 "num_blocks": 8192, 00:24:37.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:37.339 "assigned_rate_limits": { 00:24:37.339 "rw_ios_per_sec": 0, 00:24:37.339 "rw_mbytes_per_sec": 0, 00:24:37.339 "r_mbytes_per_sec": 0, 00:24:37.339 "w_mbytes_per_sec": 0 00:24:37.339 }, 00:24:37.339 "claimed": true, 00:24:37.339 "claim_type": "exclusive_write", 00:24:37.339 "zoned": false, 00:24:37.339 "supported_io_types": { 00:24:37.339 "read": true, 00:24:37.339 "write": true, 00:24:37.339 "unmap": true, 00:24:37.339 "flush": true, 00:24:37.339 "reset": true, 00:24:37.339 "nvme_admin": false, 00:24:37.339 "nvme_io": false, 00:24:37.339 "nvme_io_md": false, 00:24:37.339 "write_zeroes": true, 00:24:37.339 "zcopy": true, 00:24:37.339 "get_zone_info": false, 00:24:37.339 "zone_management": false, 00:24:37.339 "zone_append": false, 00:24:37.339 "compare": false, 00:24:37.339 "compare_and_write": false, 00:24:37.339 "abort": true, 00:24:37.339 "seek_hole": false, 00:24:37.339 "seek_data": false, 00:24:37.339 "copy": true, 00:24:37.339 "nvme_iov_md": false 00:24:37.339 }, 00:24:37.339 "memory_domains": [ 00:24:37.339 { 00:24:37.339 "dma_device_id": "system", 00:24:37.339 "dma_device_type": 1 00:24:37.339 }, 00:24:37.339 { 00:24:37.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.339 "dma_device_type": 2 00:24:37.339 } 00:24:37.339 ], 00:24:37.339 "driver_specific": { 00:24:37.339 "passthru": { 00:24:37.339 "name": "pt2", 00:24:37.339 "base_bdev_name": "malloc2" 00:24:37.339 } 00:24:37.339 } 00:24:37.339 }' 00:24:37.339 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.339 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.339 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:24:37.339 06:19:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.339 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.339 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:37.339 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.339 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.598 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:37.598 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.598 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.598 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:37.598 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:37.598 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:24:37.857 [2024-08-13 06:19:39.444476] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.857 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=57af8c1e-2795-47e8-9445-bc43af56e65a 00:24:37.857 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z 
57af8c1e-2795-47e8-9445-bc43af56e65a ']' 00:24:37.857 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:38.117 [2024-08-13 06:19:39.656008] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.117 [2024-08-13 06:19:39.656086] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:38.117 [2024-08-13 06:19:39.656153] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:38.117 [2024-08-13 06:19:39.656208] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:38.117 [2024-08-13 06:19:39.656220] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:24:38.117 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.117 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:24:38.117 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:24:38.117 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:24:38.117 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:38.117 06:19:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:38.376 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:38.376 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:38.636 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:38.636 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@646 -- # local es=0 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:38.899 06:19:40 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:38.899 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:24:38.899 [2024-08-13 06:19:40.682197] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:38.899 [2024-08-13 06:19:40.683944] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:38.899 [2024-08-13 06:19:40.684055] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:38.899 [2024-08-13 06:19:40.684151] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:38.899 [2024-08-13 06:19:40.684186] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.899 [2024-08-13 06:19:40.684208] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:24:38.899 request: 00:24:38.899 { 00:24:38.899 "name": "raid_bdev1", 00:24:38.899 "raid_level": "raid1", 00:24:38.899 "base_bdevs": [ 00:24:38.899 "malloc1", 00:24:38.899 "malloc2" 00:24:38.899 ], 00:24:38.899 "superblock": false, 00:24:38.899 "method": "bdev_raid_create", 00:24:38.899 "req_id": 1 00:24:38.899 } 00:24:38.899 Got JSON-RPC error response 00:24:38.899 response: 00:24:38.899 { 00:24:38.899 "code": -17, 00:24:38.899 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:38.899 } 00:24:39.158 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@649 -- # es=1 00:24:39.158 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:39.158 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:39.158 06:19:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:39.159 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.159 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:24:39.159 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:24:39.159 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:24:39.159 06:19:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:39.418 [2024-08-13 06:19:41.073472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:39.418 [2024-08-13 06:19:41.073592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.418 [2024-08-13 06:19:41.073624] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:39.418 [2024-08-13 06:19:41.073653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.418 [2024-08-13 06:19:41.075664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.418 [2024-08-13 06:19:41.075741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:39.418 [2024-08-13 06:19:41.075821] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:39.418 [2024-08-13 06:19:41.075859] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:39.418 pt1 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.418 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.677 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.677 "name": "raid_bdev1", 00:24:39.677 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:39.677 "strip_size_kb": 0, 00:24:39.677 "state": "configuring", 00:24:39.677 "raid_level": "raid1", 00:24:39.677 "superblock": true, 00:24:39.677 "num_base_bdevs": 2, 00:24:39.677 "num_base_bdevs_discovered": 1, 00:24:39.677 "num_base_bdevs_operational": 2, 00:24:39.677 "base_bdevs_list": [ 00:24:39.677 { 00:24:39.677 "name": "pt1", 00:24:39.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.677 "is_configured": true, 00:24:39.677 "data_offset": 256, 00:24:39.677 "data_size": 7936 00:24:39.677 }, 00:24:39.677 { 00:24:39.677 "name": null, 00:24:39.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.677 "is_configured": false, 00:24:39.677 "data_offset": 256, 00:24:39.677 "data_size": 7936 00:24:39.677 } 00:24:39.677 ] 00:24:39.677 }' 00:24:39.677 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.677 06:19:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.245 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:24:40.245 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:24:40.246 
06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:40.246 [2024-08-13 06:19:41.972004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:40.246 [2024-08-13 06:19:41.972155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.246 [2024-08-13 06:19:41.972198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:40.246 [2024-08-13 06:19:41.972228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.246 [2024-08-13 06:19:41.972699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.246 [2024-08-13 06:19:41.972767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:40.246 [2024-08-13 06:19:41.972858] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:40.246 [2024-08-13 06:19:41.972896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:40.246 [2024-08-13 06:19:41.973009] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:24:40.246 [2024-08-13 06:19:41.973060] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:40.246 [2024-08-13 06:19:41.973336] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:40.246 [2024-08-13 06:19:41.973479] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:24:40.246 [2024-08-13 06:19:41.973515] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:24:40.246 [2024-08-13 06:19:41.973641] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.246 pt2 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.246 06:19:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.505 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.505 "name": "raid_bdev1", 00:24:40.505 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:40.505 "strip_size_kb": 0, 00:24:40.505 "state": "online", 00:24:40.505 "raid_level": "raid1", 00:24:40.505 "superblock": true, 00:24:40.505 "num_base_bdevs": 2, 00:24:40.505 "num_base_bdevs_discovered": 2, 00:24:40.505 "num_base_bdevs_operational": 2, 00:24:40.505 "base_bdevs_list": [ 00:24:40.505 { 00:24:40.505 "name": "pt1", 00:24:40.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:40.505 "is_configured": true, 00:24:40.505 "data_offset": 256, 00:24:40.505 "data_size": 7936 00:24:40.505 }, 00:24:40.505 { 00:24:40.505 "name": "pt2", 00:24:40.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.505 "is_configured": true, 00:24:40.505 "data_offset": 256, 00:24:40.505 "data_size": 7936 00:24:40.505 } 00:24:40.505 ] 00:24:40.505 }' 00:24:40.505 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.505 06:19:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:41.074 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:41.333 [2024-08-13 06:19:42.961180] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.333 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:41.333 "name": "raid_bdev1", 00:24:41.333 "aliases": [ 00:24:41.333 "57af8c1e-2795-47e8-9445-bc43af56e65a" 00:24:41.333 ], 00:24:41.333 "product_name": "Raid Volume", 00:24:41.333 "block_size": 4096, 00:24:41.333 "num_blocks": 7936, 00:24:41.333 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:41.333 "assigned_rate_limits": { 00:24:41.333 "rw_ios_per_sec": 0, 00:24:41.333 "rw_mbytes_per_sec": 0, 00:24:41.333 "r_mbytes_per_sec": 0, 00:24:41.333 "w_mbytes_per_sec": 0 00:24:41.333 }, 00:24:41.333 "claimed": false, 00:24:41.333 "zoned": false, 00:24:41.333 "supported_io_types": { 00:24:41.333 "read": true, 00:24:41.333 "write": true, 00:24:41.333 "unmap": false, 00:24:41.333 "flush": false, 00:24:41.333 "reset": true, 00:24:41.333 "nvme_admin": false, 00:24:41.333 "nvme_io": false, 00:24:41.333 "nvme_io_md": false, 00:24:41.333 "write_zeroes": true, 00:24:41.333 "zcopy": false, 00:24:41.333 "get_zone_info": false, 00:24:41.333 "zone_management": false, 00:24:41.333 
"zone_append": false, 00:24:41.333 "compare": false, 00:24:41.333 "compare_and_write": false, 00:24:41.333 "abort": false, 00:24:41.333 "seek_hole": false, 00:24:41.333 "seek_data": false, 00:24:41.333 "copy": false, 00:24:41.333 "nvme_iov_md": false 00:24:41.333 }, 00:24:41.333 "memory_domains": [ 00:24:41.333 { 00:24:41.333 "dma_device_id": "system", 00:24:41.333 "dma_device_type": 1 00:24:41.333 }, 00:24:41.333 { 00:24:41.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.333 "dma_device_type": 2 00:24:41.333 }, 00:24:41.333 { 00:24:41.333 "dma_device_id": "system", 00:24:41.333 "dma_device_type": 1 00:24:41.333 }, 00:24:41.333 { 00:24:41.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.333 "dma_device_type": 2 00:24:41.333 } 00:24:41.333 ], 00:24:41.333 "driver_specific": { 00:24:41.333 "raid": { 00:24:41.333 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:41.333 "strip_size_kb": 0, 00:24:41.333 "state": "online", 00:24:41.333 "raid_level": "raid1", 00:24:41.333 "superblock": true, 00:24:41.333 "num_base_bdevs": 2, 00:24:41.333 "num_base_bdevs_discovered": 2, 00:24:41.333 "num_base_bdevs_operational": 2, 00:24:41.333 "base_bdevs_list": [ 00:24:41.333 { 00:24:41.333 "name": "pt1", 00:24:41.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:41.333 "is_configured": true, 00:24:41.333 "data_offset": 256, 00:24:41.333 "data_size": 7936 00:24:41.333 }, 00:24:41.333 { 00:24:41.333 "name": "pt2", 00:24:41.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:41.333 "is_configured": true, 00:24:41.333 "data_offset": 256, 00:24:41.333 "data_size": 7936 00:24:41.333 } 00:24:41.333 ] 00:24:41.333 } 00:24:41.333 } 00:24:41.333 }' 00:24:41.333 06:19:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:41.333 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:41.333 pt2' 00:24:41.333 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:41.333 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:41.333 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:41.592 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:41.592 "name": "pt1", 00:24:41.592 "aliases": [ 00:24:41.592 "00000000-0000-0000-0000-000000000001" 00:24:41.592 ], 00:24:41.592 "product_name": "passthru", 00:24:41.592 "block_size": 4096, 00:24:41.592 "num_blocks": 8192, 00:24:41.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:41.592 "assigned_rate_limits": { 00:24:41.592 "rw_ios_per_sec": 0, 00:24:41.592 "rw_mbytes_per_sec": 0, 00:24:41.592 "r_mbytes_per_sec": 0, 00:24:41.592 "w_mbytes_per_sec": 0 00:24:41.592 }, 00:24:41.592 "claimed": true, 00:24:41.592 "claim_type": "exclusive_write", 00:24:41.592 "zoned": false, 00:24:41.592 "supported_io_types": { 00:24:41.592 "read": true, 00:24:41.592 "write": true, 00:24:41.592 "unmap": true, 00:24:41.592 "flush": true, 00:24:41.592 "reset": true, 00:24:41.592 "nvme_admin": false, 00:24:41.592 "nvme_io": false, 00:24:41.592 "nvme_io_md": false, 00:24:41.592 "write_zeroes": true, 00:24:41.592 "zcopy": true, 00:24:41.592 "get_zone_info": false, 00:24:41.592 "zone_management": false, 00:24:41.592 "zone_append": false, 00:24:41.593 "compare": false, 00:24:41.593 
"compare_and_write": false, 00:24:41.593 "abort": true, 00:24:41.593 "seek_hole": false, 00:24:41.593 "seek_data": false, 00:24:41.593 "copy": true, 00:24:41.593 "nvme_iov_md": false 00:24:41.593 }, 00:24:41.593 "memory_domains": [ 00:24:41.593 { 00:24:41.593 "dma_device_id": "system", 00:24:41.593 "dma_device_type": 1 00:24:41.593 }, 00:24:41.593 { 00:24:41.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.593 "dma_device_type": 2 00:24:41.593 } 00:24:41.593 ], 00:24:41.593 "driver_specific": { 00:24:41.593 "passthru": { 00:24:41.593 "name": "pt1", 00:24:41.593 "base_bdev_name": "malloc1" 00:24:41.593 } 00:24:41.593 } 00:24:41.593 }' 00:24:41.593 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.593 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.593 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:24:41.593 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:41.852 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:42.114 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:42.114 "name": "pt2", 00:24:42.114 "aliases": [ 00:24:42.114 "00000000-0000-0000-0000-000000000002" 00:24:42.114 ], 00:24:42.114 "product_name": "passthru", 00:24:42.114 "block_size": 4096, 00:24:42.114 "num_blocks": 8192, 00:24:42.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:42.114 "assigned_rate_limits": { 00:24:42.114 "rw_ios_per_sec": 0, 00:24:42.114 "rw_mbytes_per_sec": 0, 00:24:42.114 "r_mbytes_per_sec": 0, 00:24:42.114 "w_mbytes_per_sec": 0 00:24:42.114 }, 00:24:42.114 "claimed": true, 00:24:42.114 "claim_type": "exclusive_write", 00:24:42.114 "zoned": false, 00:24:42.114 "supported_io_types": { 00:24:42.114 "read": true, 00:24:42.114 "write": true, 00:24:42.114 "unmap": true, 00:24:42.114 "flush": true, 00:24:42.114 "reset": true, 00:24:42.114 "nvme_admin": false, 00:24:42.114 "nvme_io": false, 00:24:42.114 "nvme_io_md": false, 00:24:42.114 "write_zeroes": true, 00:24:42.114 "zcopy": true, 00:24:42.114 "get_zone_info": false, 00:24:42.114 "zone_management": false, 00:24:42.114 "zone_append": false, 00:24:42.114 "compare": false, 00:24:42.114 "compare_and_write": false, 00:24:42.114 "abort": true, 00:24:42.114 "seek_hole": false, 00:24:42.114 
"seek_data": false, 00:24:42.114 "copy": true, 00:24:42.114 "nvme_iov_md": false 00:24:42.114 }, 00:24:42.114 "memory_domains": [ 00:24:42.114 { 00:24:42.114 "dma_device_id": "system", 00:24:42.114 "dma_device_type": 1 00:24:42.114 }, 00:24:42.114 { 00:24:42.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.114 "dma_device_type": 2 00:24:42.114 } 00:24:42.114 ], 00:24:42.114 "driver_specific": { 00:24:42.114 "passthru": { 00:24:42.114 "name": "pt2", 00:24:42.114 "base_bdev_name": "malloc2" 00:24:42.114 } 00:24:42.114 } 00:24:42.114 }' 00:24:42.114 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:42.114 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:42.114 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:24:42.114 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:42.376 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:42.376 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:42.376 06:19:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.376 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.376 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:42.376 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.376 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:24:42.635 [2024-08-13 06:19:44.374750] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' 57af8c1e-2795-47e8-9445-bc43af56e65a '!=' 57af8c1e-2795-47e8-9445-bc43af56e65a ']' 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:24:42.635 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:42.894 [2024-08-13 06:19:44.562288] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:42.894 06:19:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.894 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.154 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:43.154 "name": "raid_bdev1", 00:24:43.154 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:43.154 "strip_size_kb": 0, 00:24:43.154 "state": "online", 00:24:43.154 "raid_level": "raid1", 00:24:43.154 "superblock": true, 00:24:43.154 "num_base_bdevs": 2, 00:24:43.154 "num_base_bdevs_discovered": 1, 00:24:43.154 "num_base_bdevs_operational": 1, 00:24:43.154 "base_bdevs_list": [ 00:24:43.154 { 00:24:43.154 "name": null, 00:24:43.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.154 "is_configured": false, 00:24:43.154 "data_offset": 256, 00:24:43.154 "data_size": 7936 00:24:43.154 }, 00:24:43.154 { 00:24:43.154 "name": "pt2", 00:24:43.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:43.154 "is_configured": true, 00:24:43.154 "data_offset": 256, 00:24:43.154 "data_size": 7936 00:24:43.154 } 00:24:43.154 ] 00:24:43.154 }' 00:24:43.154 06:19:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:43.154 06:19:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.722 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:43.722 [2024-08-13 06:19:45.460722] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:43.722 [2024-08-13 06:19:45.460834] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:43.722 [2024-08-13 06:19:45.460934] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:43.722 [2024-08-13 06:19:45.460996] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:43.722 [2024-08-13 06:19:45.461054] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:24:43.722 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.722 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:24:43.980 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:24:43.980 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:24:43.980 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:43.981 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 
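The degraded-state check run just above (before raid_bdev1 was torn down) reduces to one RPC call plus a jq filter. A minimal stand-alone sketch, assuming an SPDK app is listening on /var/tmp/spdk-raid.sock and a raid bdev named raid_bdev1 still exists; the field names come straight from the JSON dumped in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # pull only the raid bdev of interest out of the bdev_raid_get_bdevs output
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.raid_level) discovered=\(.num_base_bdevs_discovered) operational=\(.num_base_bdevs_operational)"'
    # expected after deleting pt1: online raid1 discovered=1 operational=1
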
00:24:43.981 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:44.240 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:44.240 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:44.240 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:24:44.240 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:44.240 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:24:44.240 06:19:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:44.500 [2024-08-13 06:19:46.035668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:44.500 [2024-08-13 06:19:46.035799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.500 [2024-08-13 06:19:46.035836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:24:44.500 [2024-08-13 06:19:46.035866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.500 [2024-08-13 06:19:46.037904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.500 [2024-08-13 06:19:46.037982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:44.500 [2024-08-13 06:19:46.038072] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:44.500 [2024-08-13 06:19:46.038113] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:44.500 [2024-08-13 06:19:46.038195] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:24:44.500 [2024-08-13 06:19:46.038205] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:44.500 [2024-08-13 06:19:46.038475] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:24:44.500 [2024-08-13 06:19:46.038613] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:24:44.500 [2024-08-13 06:19:46.038623] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:24:44.500 [2024-08-13 06:19:46.038731] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.500 pt2 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.500 06:19:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:44.500 "name": "raid_bdev1", 00:24:44.500 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:44.500 "strip_size_kb": 0, 00:24:44.500 "state": "online", 00:24:44.500 "raid_level": "raid1", 00:24:44.500 "superblock": true, 00:24:44.500 "num_base_bdevs": 2, 00:24:44.500 "num_base_bdevs_discovered": 1, 00:24:44.500 "num_base_bdevs_operational": 1, 00:24:44.500 "base_bdevs_list": [ 00:24:44.500 { 00:24:44.500 "name": null, 00:24:44.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.500 "is_configured": false, 00:24:44.500 "data_offset": 256, 00:24:44.500 "data_size": 7936 00:24:44.500 }, 00:24:44.500 { 00:24:44.500 "name": "pt2", 00:24:44.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:44.500 "is_configured": true, 00:24:44.500 "data_offset": 256, 00:24:44.500 "data_size": 7936 00:24:44.500 } 00:24:44.500 ] 00:24:44.500 }' 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:44.500 06:19:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.069 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:45.330 [2024-08-13 06:19:46.894304] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:45.330 [2024-08-13 06:19:46.894409] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:45.330 [2024-08-13 06:19:46.894480] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.330 [2024-08-13 06:19:46.894528] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:45.330 [2024-08-13 06:19:46.894538] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:24:45.330 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.330 06:19:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:24:45.330 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:24:45.330 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:24:45.330 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:24:45.330 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:45.590 [2024-08-13 06:19:47.265712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:24:45.590 [2024-08-13 06:19:47.265782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.590 [2024-08-13 06:19:47.265801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:45.590 [2024-08-13 06:19:47.265811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.590 [2024-08-13 06:19:47.267879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.590 [2024-08-13 06:19:47.267921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:45.590 [2024-08-13 06:19:47.267997] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:45.590 [2024-08-13 06:19:47.268047] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:45.590 [2024-08-13 06:19:47.268162] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:45.590 [2024-08-13 06:19:47.268172] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:45.590 [2024-08-13 06:19:47.268193] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:24:45.590 [2024-08-13 06:19:47.268226] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:45.590 [2024-08-13 06:19:47.268300] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:24:45.590 [2024-08-13 06:19:47.268309] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:45.590 [2024-08-13 06:19:47.268521] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:45.590 [2024-08-13 06:19:47.268636] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:24:45.590 [2024-08-13 06:19:47.268648] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:24:45.590 [2024-08-13 06:19:47.268740] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.590 pt1 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:24:45.590 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.850 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.850 "name": "raid_bdev1", 00:24:45.850 "uuid": "57af8c1e-2795-47e8-9445-bc43af56e65a", 00:24:45.850 "strip_size_kb": 0, 00:24:45.850 "state": "online", 00:24:45.850 "raid_level": "raid1", 00:24:45.850 "superblock": true, 00:24:45.850 "num_base_bdevs": 2, 00:24:45.850 "num_base_bdevs_discovered": 1, 00:24:45.850 "num_base_bdevs_operational": 1, 00:24:45.850 "base_bdevs_list": [ 00:24:45.850 { 00:24:45.850 "name": null, 00:24:45.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.850 "is_configured": false, 00:24:45.850 "data_offset": 256, 00:24:45.850 "data_size": 7936 00:24:45.850 }, 00:24:45.850 { 00:24:45.850 "name": "pt2", 00:24:45.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:45.850 "is_configured": true, 00:24:45.850 "data_offset": 256, 00:24:45.850 "data_size": 7936 00:24:45.850 } 00:24:45.850 ] 00:24:45.850 }' 00:24:45.850 06:19:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.850 06:19:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.418 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:46.418 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:24:46.418 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:24:46.418 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:46.418 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:24:46.677 [2024-08-13 06:19:48.367966] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' 57af8c1e-2795-47e8-9445-bc43af56e65a '!=' 57af8c1e-2795-47e8-9445-bc43af56e65a ']' 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 105566 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 105566 ']' 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 105566 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105566 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:46.677 killing process with pid 105566 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105566' 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@965 -- # kill 105566 00:24:46.677 [2024-08-13 06:19:48.446694] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:46.677 [2024-08-13 06:19:48.446773] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.677 [2024-08-13 06:19:48.446812] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.677 [2024-08-13 06:19:48.446823] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:24:46.677 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 105566 00:24:46.937 [2024-08-13 06:19:48.469566] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:46.937 06:19:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:24:46.937 00:24:46.937 real 0m13.442s 00:24:46.937 user 0m24.455s 00:24:46.937 sys 0m2.351s 00:24:46.937 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:46.937 06:19:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.937 ************************************ 00:24:46.937 END TEST raid_superblock_test_4k 00:24:46.937 ************************************ 00:24:47.196 06:19:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # '[' true = true ']' 00:24:47.196 06:19:48 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:24:47.196 06:19:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:24:47.196 06:19:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:47.196 06:19:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:47.196 ************************************ 00:24:47.196 START TEST raid_rebuild_test_sb_4k 00:24:47.196 ************************************ 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=106039 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 106039 /var/tmp/spdk-raid.sock 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 106039 ']' 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:47.196 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:47.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:47.197 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:47.197 06:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.197 [2024-08-13 06:19:48.903163] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:24:47.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:47.197 Zero copy mechanism will not be used. 
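The bdevperf app launched above is started with -z, so the harness blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock..." before issuing any RPCs. Simplified, that wait amounts to polling an RPC until the socket answers; this is a sketch of the idea, not the exact waitforlisten helper from the common scripts:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # rpc_get_methods is a cheap call that only succeeds once the app is listening
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
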
00:24:47.197 [2024-08-13 06:19:48.903376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106039 ] 00:24:47.456 [2024-08-13 06:19:49.051586] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.456 [2024-08-13 06:19:49.097724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.456 [2024-08-13 06:19:49.140671] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:47.456 [2024-08-13 06:19:49.140706] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.024 06:19:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:48.024 06:19:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:24:48.024 06:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:48.024 06:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:24:48.283 BaseBdev1_malloc 00:24:48.283 06:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:48.283 [2024-08-13 06:19:50.048908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:48.283 [2024-08-13 06:19:50.048974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.283 [2024-08-13 06:19:50.049001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:24:48.283 [2024-08-13 06:19:50.049018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.283 [2024-08-13 06:19:50.050968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.283 [2024-08-13 06:19:50.051013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:48.283 BaseBdev1 00:24:48.542 06:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:48.542 06:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:24:48.542 BaseBdev2_malloc 00:24:48.542 06:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:48.801 [2024-08-13 06:19:50.424734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:48.801 [2024-08-13 06:19:50.424801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.801 [2024-08-13 06:19:50.424821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:48.801 [2024-08-13 06:19:50.424832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.801 [2024-08-13 06:19:50.426777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.801 [2024-08-13 06:19:50.426821] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:24:48.801 BaseBdev2 00:24:48.801 06:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:24:49.060 spare_malloc 00:24:49.060 06:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:49.060 spare_delay 00:24:49.327 06:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:49.327 [2024-08-13 06:19:51.039235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:49.328 [2024-08-13 06:19:51.039309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.328 [2024-08-13 06:19:51.039335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:49.328 [2024-08-13 06:19:51.039345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.328 [2024-08-13 06:19:51.041313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.328 [2024-08-13 06:19:51.041428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:49.328 spare 00:24:49.328 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:49.605 [2024-08-13 06:19:51.222968] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.605 [2024-08-13 06:19:51.224735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:49.605 [2024-08-13 06:19:51.224897] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:24:49.605 [2024-08-13 06:19:51.224914] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:49.605 [2024-08-13 06:19:51.225184] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:49.605 [2024-08-13 06:19:51.225337] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:24:49.605 [2024-08-13 06:19:51.225347] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:24:49.605 [2024-08-13 06:19:51.225484] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.605 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.877 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:49.877 "name": "raid_bdev1", 00:24:49.877 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:49.877 "strip_size_kb": 0, 00:24:49.877 "state": "online", 00:24:49.877 "raid_level": "raid1", 00:24:49.877 "superblock": true, 00:24:49.877 "num_base_bdevs": 2, 00:24:49.877 "num_base_bdevs_discovered": 2, 00:24:49.877 "num_base_bdevs_operational": 2, 00:24:49.877 "base_bdevs_list": [ 00:24:49.877 { 00:24:49.877 "name": "BaseBdev1", 00:24:49.877 "uuid": "68315557-367f-5880-8c6d-3974e32fe1a6", 00:24:49.877 "is_configured": true, 00:24:49.877 "data_offset": 256, 00:24:49.877 "data_size": 7936 00:24:49.877 }, 00:24:49.877 { 00:24:49.877 "name": "BaseBdev2", 00:24:49.877 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:49.877 "is_configured": true, 00:24:49.877 "data_offset": 256, 00:24:49.877 "data_size": 7936 00:24:49.877 } 00:24:49.877 ] 00:24:49.877 }' 00:24:49.877 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:49.877 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:50.452 06:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:50.452 [2024-08-13 06:19:52.153881] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.452 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:24:50.452 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.452 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.711 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:50.971 [2024-08-13 06:19:52.521078] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:50.971 /dev/nbd0 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.971 1+0 records in 00:24:50.971 1+0 records out 00:24:50.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053959 s, 7.6 MB/s 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:50.971 06:19:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:51.541 7936+0 records in 00:24:51.541 7936+0 records out 00:24:51.541 32505856 bytes (33 MB, 31 MiB) copied, 0.615622 s, 
52.8 MB/s 00:24:51.541 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:51.541 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:51.541 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:51.541 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:51.541 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:51.541 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.541 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:51.801 [2024-08-13 06:19:53.416336] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.801 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:52.061 [2024-08-13 06:19:53.597387] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
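The state being verified here came out of a short RPC sequence built up over the preceding steps. Condensed, it is the following; every command appears in the trace, and the sketch assumes the same bdevperf app and RPC socket (/var/tmp/spdk-raid.sock) used throughout:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # two 32 MB malloc bdevs (4096-byte blocks), each wrapped in a passthru bdev
    $rpc bdev_malloc_create 32 4096 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 4096 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    # raid1 with an on-disk superblock (-s); the reserved metadata leaves 7936 data blocks
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # expose it as /dev/nbd0, fill it, then detach
    $rpc nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
    $rpc nbd_stop_disk /dev/nbd0
    # drop one leg so the array goes degraded (1 of 2 base bdevs left)
    $rpc bdev_raid_remove_base_bdev BaseBdev1
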
00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.061 "name": "raid_bdev1", 00:24:52.061 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:52.061 "strip_size_kb": 0, 00:24:52.061 "state": "online", 00:24:52.061 "raid_level": "raid1", 00:24:52.061 "superblock": true, 00:24:52.061 "num_base_bdevs": 2, 00:24:52.061 "num_base_bdevs_discovered": 1, 00:24:52.061 "num_base_bdevs_operational": 1, 00:24:52.061 "base_bdevs_list": [ 00:24:52.061 { 00:24:52.061 "name": null, 00:24:52.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.061 "is_configured": false, 00:24:52.061 "data_offset": 256, 00:24:52.061 "data_size": 7936 00:24:52.061 }, 00:24:52.061 { 00:24:52.061 "name": "BaseBdev2", 00:24:52.061 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:52.061 "is_configured": true, 00:24:52.061 "data_offset": 256, 00:24:52.061 "data_size": 7936 00:24:52.061 } 00:24:52.061 ] 00:24:52.061 }' 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.061 06:19:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.629 06:19:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:52.889 [2024-08-13 06:19:54.531807] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.889 [2024-08-13 06:19:54.536040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:24:52.889 [2024-08-13 06:19:54.537830] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:52.889 06:19:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:53.828 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.828 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:53.828 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:53.828 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:53.828 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:53.828 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.828 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.088 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:54.088 "name": "raid_bdev1", 00:24:54.088 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:54.088 "strip_size_kb": 0, 00:24:54.088 "state": "online", 00:24:54.088 "raid_level": "raid1", 00:24:54.088 "superblock": true, 00:24:54.088 "num_base_bdevs": 2, 00:24:54.088 "num_base_bdevs_discovered": 2, 00:24:54.088 "num_base_bdevs_operational": 2, 00:24:54.088 "process": { 00:24:54.088 "type": "rebuild", 00:24:54.088 "target": "spare", 00:24:54.088 "progress": { 00:24:54.088 "blocks": 3072, 00:24:54.088 "percent": 38 00:24:54.088 } 00:24:54.088 }, 00:24:54.088 "base_bdevs_list": [ 00:24:54.088 { 00:24:54.088 "name": "spare", 00:24:54.088 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:24:54.088 
"is_configured": true, 00:24:54.088 "data_offset": 256, 00:24:54.088 "data_size": 7936 00:24:54.088 }, 00:24:54.088 { 00:24:54.088 "name": "BaseBdev2", 00:24:54.088 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:54.088 "is_configured": true, 00:24:54.088 "data_offset": 256, 00:24:54.088 "data_size": 7936 00:24:54.088 } 00:24:54.088 ] 00:24:54.088 }' 00:24:54.088 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:54.088 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.088 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:54.088 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:54.088 06:19:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:54.348 [2024-08-13 06:19:56.026390] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.348 [2024-08-13 06:19:56.043544] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:54.348 [2024-08-13 06:19:56.043607] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.348 [2024-08-13 06:19:56.043623] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.348 [2024-08-13 06:19:56.043644] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.348 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.608 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.608 "name": "raid_bdev1", 00:24:54.608 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:54.608 "strip_size_kb": 0, 00:24:54.608 "state": "online", 00:24:54.608 "raid_level": "raid1", 00:24:54.608 "superblock": true, 00:24:54.608 "num_base_bdevs": 2, 00:24:54.608 "num_base_bdevs_discovered": 1, 00:24:54.608 "num_base_bdevs_operational": 1, 
00:24:54.608 "base_bdevs_list": [ 00:24:54.608 { 00:24:54.608 "name": null, 00:24:54.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.608 "is_configured": false, 00:24:54.608 "data_offset": 256, 00:24:54.608 "data_size": 7936 00:24:54.608 }, 00:24:54.608 { 00:24:54.608 "name": "BaseBdev2", 00:24:54.608 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:54.608 "is_configured": true, 00:24:54.608 "data_offset": 256, 00:24:54.608 "data_size": 7936 00:24:54.608 } 00:24:54.608 ] 00:24:54.608 }' 00:24:54.608 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.608 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:55.177 "name": "raid_bdev1", 00:24:55.177 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:55.177 "strip_size_kb": 0, 00:24:55.177 "state": "online", 00:24:55.177 "raid_level": "raid1", 00:24:55.177 "superblock": true, 00:24:55.177 "num_base_bdevs": 2, 00:24:55.177 "num_base_bdevs_discovered": 1, 00:24:55.177 "num_base_bdevs_operational": 1, 00:24:55.177 "base_bdevs_list": [ 00:24:55.177 { 00:24:55.177 "name": null, 00:24:55.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.177 "is_configured": false, 00:24:55.177 "data_offset": 256, 00:24:55.177 "data_size": 7936 00:24:55.177 }, 00:24:55.177 { 00:24:55.177 "name": "BaseBdev2", 00:24:55.177 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:55.177 "is_configured": true, 00:24:55.177 "data_offset": 256, 00:24:55.177 "data_size": 7936 00:24:55.177 } 00:24:55.177 ] 00:24:55.177 }' 00:24:55.177 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:55.437 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:55.437 06:19:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:55.437 06:19:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:55.437 06:19:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:55.437 [2024-08-13 06:19:57.197672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.437 [2024-08-13 06:19:57.201663] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:24:55.437 [2024-08-13 06:19:57.203426] bdev_raid.c:2921:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:55.437 06:19:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:56.817 "name": "raid_bdev1", 00:24:56.817 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:56.817 "strip_size_kb": 0, 00:24:56.817 "state": "online", 00:24:56.817 "raid_level": "raid1", 00:24:56.817 "superblock": true, 00:24:56.817 "num_base_bdevs": 2, 00:24:56.817 "num_base_bdevs_discovered": 2, 00:24:56.817 "num_base_bdevs_operational": 2, 00:24:56.817 "process": { 00:24:56.817 "type": "rebuild", 00:24:56.817 "target": "spare", 00:24:56.817 "progress": { 00:24:56.817 "blocks": 3072, 00:24:56.817 "percent": 38 00:24:56.817 } 00:24:56.817 }, 00:24:56.817 "base_bdevs_list": [ 00:24:56.817 { 00:24:56.817 "name": "spare", 00:24:56.817 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:24:56.817 "is_configured": true, 00:24:56.817 "data_offset": 256, 00:24:56.817 "data_size": 7936 00:24:56.817 }, 00:24:56.817 { 00:24:56.817 "name": "BaseBdev2", 00:24:56.817 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:56.817 "is_configured": true, 00:24:56.817 "data_offset": 256, 00:24:56.817 "data_size": 7936 00:24:56.817 } 00:24:56.817 ] 00:24:56.817 }' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:24:56.817 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=1150 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:56.817 
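One genuine script hiccup is captured just above: the test at line 681 of bdev_raid.sh expands an empty, unquoted variable, so bash sees '[ = false ]' and reports "unary operator expected". The run continues regardless, as the following trace lines show, because the broken test simply evaluates to false. Which variable is empty at that point is not visible from this trace, but the failure mode is the classic unquoted-test pattern; quoting the expansion (or using bash's [[ ]]) avoids it. A minimal illustration with a hypothetical flag variable:

    flag=""                       # empty, as in the failing case
    # [ $flag = false ]           # expands to '[ = false ]' and errors out
    if [ "$flag" = false ]; then  # quoted: compares '' to 'false', no error
        echo "flag is false"
    fi
    if [[ $flag == false ]]; then # [[ ]] also tolerates the empty expansion
        echo "flag is false"
    fi
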
06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.817 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.077 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.077 "name": "raid_bdev1", 00:24:57.077 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:57.077 "strip_size_kb": 0, 00:24:57.077 "state": "online", 00:24:57.077 "raid_level": "raid1", 00:24:57.077 "superblock": true, 00:24:57.077 "num_base_bdevs": 2, 00:24:57.077 "num_base_bdevs_discovered": 2, 00:24:57.077 "num_base_bdevs_operational": 2, 00:24:57.077 "process": { 00:24:57.077 "type": "rebuild", 00:24:57.077 "target": "spare", 00:24:57.077 "progress": { 00:24:57.077 "blocks": 3840, 00:24:57.077 "percent": 48 00:24:57.077 } 00:24:57.077 }, 00:24:57.077 "base_bdevs_list": [ 00:24:57.077 { 00:24:57.077 "name": "spare", 00:24:57.077 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:24:57.077 "is_configured": true, 00:24:57.077 "data_offset": 256, 00:24:57.077 "data_size": 7936 00:24:57.077 }, 00:24:57.077 { 00:24:57.077 "name": "BaseBdev2", 00:24:57.077 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:57.077 "is_configured": true, 00:24:57.077 "data_offset": 256, 00:24:57.077 "data_size": 7936 00:24:57.077 } 00:24:57.077 ] 00:24:57.077 }' 00:24:57.077 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:57.077 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.077 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:57.077 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.077 06:19:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.457 06:19:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.457 06:20:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:58.457 "name": "raid_bdev1", 00:24:58.457 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:58.457 "strip_size_kb": 0, 00:24:58.457 "state": "online", 00:24:58.457 "raid_level": "raid1", 00:24:58.457 "superblock": true, 00:24:58.457 "num_base_bdevs": 2, 00:24:58.457 "num_base_bdevs_discovered": 2, 00:24:58.457 "num_base_bdevs_operational": 2, 00:24:58.457 "process": { 00:24:58.457 "type": "rebuild", 00:24:58.457 "target": "spare", 00:24:58.457 "progress": { 00:24:58.457 "blocks": 7168, 00:24:58.457 "percent": 90 00:24:58.457 } 00:24:58.457 }, 00:24:58.457 "base_bdevs_list": [ 00:24:58.457 { 00:24:58.457 "name": "spare", 00:24:58.457 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:24:58.457 "is_configured": true, 00:24:58.457 "data_offset": 256, 00:24:58.457 "data_size": 7936 00:24:58.457 }, 00:24:58.457 { 00:24:58.457 "name": "BaseBdev2", 00:24:58.457 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:58.457 "is_configured": true, 00:24:58.457 "data_offset": 256, 00:24:58.457 "data_size": 7936 00:24:58.457 } 00:24:58.457 ] 00:24:58.457 }' 00:24:58.457 06:20:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:58.457 06:20:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:58.457 06:20:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:58.457 06:20:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:58.457 06:20:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:58.717 [2024-08-13 06:20:00.313077] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:58.717 [2024-08-13 06:20:00.313151] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:58.717 [2024-08-13 06:20:00.313235] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.654 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:59.654 "name": "raid_bdev1", 00:24:59.654 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:59.654 "strip_size_kb": 0, 00:24:59.654 "state": "online", 00:24:59.654 "raid_level": "raid1", 00:24:59.654 "superblock": true, 00:24:59.654 "num_base_bdevs": 
2, 00:24:59.654 "num_base_bdevs_discovered": 2, 00:24:59.654 "num_base_bdevs_operational": 2, 00:24:59.655 "base_bdevs_list": [ 00:24:59.655 { 00:24:59.655 "name": "spare", 00:24:59.655 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:24:59.655 "is_configured": true, 00:24:59.655 "data_offset": 256, 00:24:59.655 "data_size": 7936 00:24:59.655 }, 00:24:59.655 { 00:24:59.655 "name": "BaseBdev2", 00:24:59.655 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:59.655 "is_configured": true, 00:24:59.655 "data_offset": 256, 00:24:59.655 "data_size": 7936 00:24:59.655 } 00:24:59.655 ] 00:24:59.655 }' 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:59.655 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:59.914 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.914 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.914 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:59.914 "name": "raid_bdev1", 00:24:59.914 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:24:59.914 "strip_size_kb": 0, 00:24:59.914 "state": "online", 00:24:59.914 "raid_level": "raid1", 00:24:59.914 "superblock": true, 00:24:59.914 "num_base_bdevs": 2, 00:24:59.914 "num_base_bdevs_discovered": 2, 00:24:59.914 "num_base_bdevs_operational": 2, 00:24:59.914 "base_bdevs_list": [ 00:24:59.914 { 00:24:59.914 "name": "spare", 00:24:59.914 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:24:59.914 "is_configured": true, 00:24:59.914 "data_offset": 256, 00:24:59.914 "data_size": 7936 00:24:59.914 }, 00:24:59.914 { 00:24:59.914 "name": "BaseBdev2", 00:24:59.914 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:24:59.914 "is_configured": true, 00:24:59.914 "data_offset": 256, 00:24:59.914 "data_size": 7936 00:24:59.914 } 00:24:59.914 ] 00:24:59.914 }' 00:24:59.914 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:59.914 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:59.914 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.174 "name": "raid_bdev1", 00:25:00.174 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:00.174 "strip_size_kb": 0, 00:25:00.174 "state": "online", 00:25:00.174 "raid_level": "raid1", 00:25:00.174 "superblock": true, 00:25:00.174 "num_base_bdevs": 2, 00:25:00.174 "num_base_bdevs_discovered": 2, 00:25:00.174 "num_base_bdevs_operational": 2, 00:25:00.174 "base_bdevs_list": [ 00:25:00.174 { 00:25:00.174 "name": "spare", 00:25:00.174 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:25:00.174 "is_configured": true, 00:25:00.174 "data_offset": 256, 00:25:00.174 "data_size": 7936 00:25:00.174 }, 00:25:00.174 { 00:25:00.174 "name": "BaseBdev2", 00:25:00.174 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:00.174 "is_configured": true, 00:25:00.174 "data_offset": 256, 00:25:00.174 "data_size": 7936 00:25:00.174 } 00:25:00.174 ] 00:25:00.174 }' 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.174 06:20:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.743 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:01.003 [2024-08-13 06:20:02.673183] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:01.003 [2024-08-13 06:20:02.673213] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:01.003 [2024-08-13 06:20:02.673284] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.003 [2024-08-13 06:20:02.673343] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.003 [2024-08-13 06:20:02.673352] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:25:01.003 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:25:01.003 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:01.262 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:01.263 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:25:01.263 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:01.263 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:01.263 06:20:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:01.525 /dev/nbd0 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:01.526 1+0 records in 00:25:01.526 1+0 records out 00:25:01.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034362 s, 11.9 MB/s 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:01.526 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:01.786 /dev/nbd1 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:01.786 1+0 records in 00:25:01.786 1+0 records out 00:25:01.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342212 s, 12.0 MB/s 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:01.786 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:02.046 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:02.305 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:02.306 06:20:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:02.306 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:02.565 [2024-08-13 06:20:04.262364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:02.565 [2024-08-13 06:20:04.262416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.565 [2024-08-13 06:20:04.262436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:02.565 [2024-08-13 06:20:04.262445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.565 [2024-08-13 06:20:04.264425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.565 [2024-08-13 06:20:04.264464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:02.565 [2024-08-13 06:20:04.264534] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:02.565 [2024-08-13 
06:20:04.264565] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:02.565 [2024-08-13 06:20:04.264682] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:02.565 spare 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.565 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.825 [2024-08-13 06:20:04.364573] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:25:02.825 [2024-08-13 06:20:04.364609] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:02.825 [2024-08-13 06:20:04.364851] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:25:02.825 [2024-08-13 06:20:04.364983] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:25:02.825 [2024-08-13 06:20:04.364996] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:25:02.825 [2024-08-13 06:20:04.365117] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:02.825 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:02.825 "name": "raid_bdev1", 00:25:02.825 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:02.825 "strip_size_kb": 0, 00:25:02.825 "state": "online", 00:25:02.825 "raid_level": "raid1", 00:25:02.825 "superblock": true, 00:25:02.825 "num_base_bdevs": 2, 00:25:02.825 "num_base_bdevs_discovered": 2, 00:25:02.825 "num_base_bdevs_operational": 2, 00:25:02.825 "base_bdevs_list": [ 00:25:02.825 { 00:25:02.825 "name": "spare", 00:25:02.825 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:25:02.825 "is_configured": true, 00:25:02.825 "data_offset": 256, 00:25:02.825 "data_size": 7936 00:25:02.825 }, 00:25:02.825 { 00:25:02.825 "name": "BaseBdev2", 00:25:02.825 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:02.825 "is_configured": true, 00:25:02.825 "data_offset": 256, 00:25:02.825 "data_size": 7936 00:25:02.825 } 00:25:02.825 ] 00:25:02.825 }' 00:25:02.825 06:20:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:02.825 06:20:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.395 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.395 "name": "raid_bdev1", 00:25:03.395 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:03.395 "strip_size_kb": 0, 00:25:03.395 "state": "online", 00:25:03.396 "raid_level": "raid1", 00:25:03.396 "superblock": true, 00:25:03.396 "num_base_bdevs": 2, 00:25:03.396 "num_base_bdevs_discovered": 2, 00:25:03.396 "num_base_bdevs_operational": 2, 00:25:03.396 "base_bdevs_list": [ 00:25:03.396 { 00:25:03.396 "name": "spare", 00:25:03.396 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:25:03.396 "is_configured": true, 00:25:03.396 "data_offset": 256, 00:25:03.396 "data_size": 7936 00:25:03.396 }, 00:25:03.396 { 00:25:03.396 "name": "BaseBdev2", 00:25:03.396 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:03.396 "is_configured": true, 00:25:03.396 "data_offset": 256, 00:25:03.396 "data_size": 7936 00:25:03.396 } 00:25:03.396 ] 00:25:03.396 }' 00:25:03.396 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.655 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:03.655 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.655 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:03.655 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:03.655 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:03.915 [2024-08-13 06:20:05.640289] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.915 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.175 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.175 "name": "raid_bdev1", 00:25:04.175 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:04.175 "strip_size_kb": 0, 00:25:04.175 "state": "online", 00:25:04.175 "raid_level": "raid1", 00:25:04.175 "superblock": true, 00:25:04.175 "num_base_bdevs": 2, 00:25:04.175 "num_base_bdevs_discovered": 1, 00:25:04.175 "num_base_bdevs_operational": 1, 00:25:04.175 "base_bdevs_list": [ 00:25:04.175 { 00:25:04.175 "name": null, 00:25:04.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.175 "is_configured": false, 00:25:04.175 "data_offset": 256, 00:25:04.175 "data_size": 7936 00:25:04.175 }, 00:25:04.175 { 00:25:04.175 "name": "BaseBdev2", 00:25:04.175 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:04.175 "is_configured": true, 00:25:04.175 "data_offset": 256, 00:25:04.175 "data_size": 7936 00:25:04.175 } 00:25:04.175 ] 00:25:04.175 }' 00:25:04.175 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.175 06:20:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:04.744 06:20:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:05.004 [2024-08-13 06:20:06.538811] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:05.004 [2024-08-13 06:20:06.539015] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:05.004 [2024-08-13 06:20:06.539100] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
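The verify_raid_bdev_process calls traced above reduce to one RPC query plus two jq extractions. A minimal stand-alone sketch of that check, assuming the same /var/tmp/spdk-raid.sock socket and raid_bdev1 name used in this run (the variable names are illustrative, not the script's own):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Fetch the raid bdev entry and pull out the background-process fields,
# defaulting to "none" when no rebuild is in flight (same jq filters as in the trace).
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
ptype=$(jq -r '.process.type // "none"' <<< "$info")
ptarget=$(jq -r '.process.target // "none"' <<< "$info")
echo "process=$ptype target=$ptarget"   # e.g. "process=rebuild target=spare" while a rebuild is running
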
00:25:05.004 [2024-08-13 06:20:06.539160] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:05.004 [2024-08-13 06:20:06.543203] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:25:05.004 [2024-08-13 06:20:06.544940] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:05.004 06:20:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:05.943 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:05.943 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:05.943 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:05.943 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:05.943 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:05.943 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.943 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.202 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:06.202 "name": "raid_bdev1", 00:25:06.202 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:06.202 "strip_size_kb": 0, 00:25:06.202 "state": "online", 00:25:06.202 "raid_level": "raid1", 00:25:06.202 "superblock": true, 00:25:06.202 "num_base_bdevs": 2, 00:25:06.202 "num_base_bdevs_discovered": 2, 00:25:06.202 "num_base_bdevs_operational": 2, 00:25:06.202 "process": { 00:25:06.202 "type": "rebuild", 00:25:06.202 "target": "spare", 00:25:06.202 "progress": { 00:25:06.202 "blocks": 2816, 00:25:06.202 "percent": 35 00:25:06.202 } 00:25:06.202 }, 00:25:06.202 "base_bdevs_list": [ 00:25:06.202 { 00:25:06.202 "name": "spare", 00:25:06.202 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:25:06.202 "is_configured": true, 00:25:06.202 "data_offset": 256, 00:25:06.202 "data_size": 7936 00:25:06.202 }, 00:25:06.202 { 00:25:06.202 "name": "BaseBdev2", 00:25:06.202 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:06.202 "is_configured": true, 00:25:06.202 "data_offset": 256, 00:25:06.202 "data_size": 7936 00:25:06.202 } 00:25:06.202 ] 00:25:06.202 }' 00:25:06.202 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:06.202 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.202 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:06.202 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.202 06:20:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:06.462 [2024-08-13 06:20:08.037896] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:06.462 [2024-08-13 06:20:08.049802] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:06.462 [2024-08-13 06:20:08.049912] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:25:06.462 [2024-08-13 06:20:08.049928] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:06.462 [2024-08-13 06:20:08.049937] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.462 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.722 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.722 "name": "raid_bdev1", 00:25:06.722 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:06.722 "strip_size_kb": 0, 00:25:06.722 "state": "online", 00:25:06.722 "raid_level": "raid1", 00:25:06.722 "superblock": true, 00:25:06.722 "num_base_bdevs": 2, 00:25:06.722 "num_base_bdevs_discovered": 1, 00:25:06.722 "num_base_bdevs_operational": 1, 00:25:06.722 "base_bdevs_list": [ 00:25:06.722 { 00:25:06.722 "name": null, 00:25:06.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.722 "is_configured": false, 00:25:06.722 "data_offset": 256, 00:25:06.722 "data_size": 7936 00:25:06.722 }, 00:25:06.722 { 00:25:06.722 "name": "BaseBdev2", 00:25:06.722 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:06.722 "is_configured": true, 00:25:06.722 "data_offset": 256, 00:25:06.722 "data_size": 7936 00:25:06.722 } 00:25:06.722 ] 00:25:06.722 }' 00:25:06.722 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.722 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:07.291 06:20:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:07.291 [2024-08-13 06:20:09.000231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:07.291 [2024-08-13 06:20:09.000330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.291 [2024-08-13 06:20:09.000355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:07.291 [2024-08-13 06:20:09.000365] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.291 [2024-08-13 06:20:09.000753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.291 [2024-08-13 06:20:09.000771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:07.291 [2024-08-13 06:20:09.000843] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:07.291 [2024-08-13 06:20:09.000855] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:07.291 [2024-08-13 06:20:09.000865] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:07.291 [2024-08-13 06:20:09.000888] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:07.291 [2024-08-13 06:20:09.004881] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:25:07.291 spare 00:25:07.291 [2024-08-13 06:20:09.006603] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:07.291 06:20:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:25:08.230 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:08.230 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:08.230 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:08.490 "name": "raid_bdev1", 00:25:08.490 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:08.490 "strip_size_kb": 0, 00:25:08.490 "state": "online", 00:25:08.490 "raid_level": "raid1", 00:25:08.490 "superblock": true, 00:25:08.490 "num_base_bdevs": 2, 00:25:08.490 "num_base_bdevs_discovered": 2, 00:25:08.490 "num_base_bdevs_operational": 2, 00:25:08.490 "process": { 00:25:08.490 "type": "rebuild", 00:25:08.490 "target": "spare", 00:25:08.490 "progress": { 00:25:08.490 "blocks": 2816, 00:25:08.490 "percent": 35 00:25:08.490 } 00:25:08.490 }, 00:25:08.490 "base_bdevs_list": [ 00:25:08.490 { 00:25:08.490 "name": "spare", 00:25:08.490 "uuid": "989ae785-4f28-5e1a-ae13-ea1a9bf3b798", 00:25:08.490 "is_configured": true, 00:25:08.490 "data_offset": 256, 00:25:08.490 "data_size": 7936 00:25:08.490 }, 00:25:08.490 { 00:25:08.490 "name": "BaseBdev2", 00:25:08.490 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:08.490 "is_configured": true, 00:25:08.490 "data_offset": 256, 00:25:08.490 "data_size": 7936 00:25:08.490 } 00:25:08.490 ] 00:25:08.490 }' 00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
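In the steps that follow, the spare passthru bdev is deleted while the rebuild is still running and the array is then expected to stay online in degraded form; the verify_raid_bdev_state raid_bdev1 online raid1 0 1 call seen below amounts to jq assertions over the same RPC output. A condensed sketch of those assertions (field names and expected values taken from the JSON dumps in this run; this compact form is illustrative, not the script's actual code):

info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# Degraded but online: one of two base bdevs left, raid1, strip size not applicable (0).
[[ $(jq -r '.state' <<< "$info") == online ]]
[[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 1 ]]
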
00:25:08.490 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:08.750 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:08.750 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:08.750 [2024-08-13 06:20:10.482759] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:08.750 [2024-08-13 06:20:10.511605] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:08.750 [2024-08-13 06:20:10.511659] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.750 [2024-08-13 06:20:10.511675] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:08.750 [2024-08-13 06:20:10.511682] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.009 "name": "raid_bdev1", 00:25:09.009 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:09.009 "strip_size_kb": 0, 00:25:09.009 "state": "online", 00:25:09.009 "raid_level": "raid1", 00:25:09.009 "superblock": true, 00:25:09.009 "num_base_bdevs": 2, 00:25:09.009 "num_base_bdevs_discovered": 1, 00:25:09.009 "num_base_bdevs_operational": 1, 00:25:09.009 "base_bdevs_list": [ 00:25:09.009 { 00:25:09.009 "name": null, 00:25:09.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.009 "is_configured": false, 00:25:09.009 "data_offset": 256, 00:25:09.009 "data_size": 7936 00:25:09.009 }, 00:25:09.009 { 00:25:09.009 "name": "BaseBdev2", 00:25:09.009 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:09.009 "is_configured": true, 00:25:09.009 "data_offset": 256, 00:25:09.009 "data_size": 7936 00:25:09.009 } 00:25:09.009 ] 00:25:09.009 }' 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:25:09.009 06:20:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:09.577 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.577 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:09.577 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:09.577 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:09.577 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:09.577 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.577 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.836 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:09.836 "name": "raid_bdev1", 00:25:09.836 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:09.836 "strip_size_kb": 0, 00:25:09.836 "state": "online", 00:25:09.836 "raid_level": "raid1", 00:25:09.836 "superblock": true, 00:25:09.836 "num_base_bdevs": 2, 00:25:09.836 "num_base_bdevs_discovered": 1, 00:25:09.836 "num_base_bdevs_operational": 1, 00:25:09.836 "base_bdevs_list": [ 00:25:09.836 { 00:25:09.836 "name": null, 00:25:09.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.836 "is_configured": false, 00:25:09.836 "data_offset": 256, 00:25:09.836 "data_size": 7936 00:25:09.836 }, 00:25:09.836 { 00:25:09.836 "name": "BaseBdev2", 00:25:09.836 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:09.836 "is_configured": true, 00:25:09.836 "data_offset": 256, 00:25:09.836 "data_size": 7936 00:25:09.836 } 00:25:09.836 ] 00:25:09.836 }' 00:25:09.836 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:09.836 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:09.836 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:09.836 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:09.836 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:10.096 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:10.355 [2024-08-13 06:20:11.905467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:10.355 [2024-08-13 06:20:11.905578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.355 [2024-08-13 06:20:11.905620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:10.355 [2024-08-13 06:20:11.905647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.355 [2024-08-13 06:20:11.906051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.355 [2024-08-13 06:20:11.906108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:25:10.355 [2024-08-13 06:20:11.906212] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:10.355 [2024-08-13 06:20:11.906293] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:10.355 [2024-08-13 06:20:11.906341] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:10.355 BaseBdev1 00:25:10.355 06:20:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.293 06:20:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.552 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.552 "name": "raid_bdev1", 00:25:11.552 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:11.552 "strip_size_kb": 0, 00:25:11.552 "state": "online", 00:25:11.552 "raid_level": "raid1", 00:25:11.552 "superblock": true, 00:25:11.552 "num_base_bdevs": 2, 00:25:11.552 "num_base_bdevs_discovered": 1, 00:25:11.552 "num_base_bdevs_operational": 1, 00:25:11.552 "base_bdevs_list": [ 00:25:11.552 { 00:25:11.552 "name": null, 00:25:11.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.552 "is_configured": false, 00:25:11.552 "data_offset": 256, 00:25:11.552 "data_size": 7936 00:25:11.552 }, 00:25:11.552 { 00:25:11.552 "name": "BaseBdev2", 00:25:11.552 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:11.552 "is_configured": true, 00:25:11.552 "data_offset": 256, 00:25:11.552 "data_size": 7936 00:25:11.552 } 00:25:11.552 ] 00:25:11.552 }' 00:25:11.552 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.552 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.126 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:12.126 "name": "raid_bdev1", 00:25:12.126 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:12.126 "strip_size_kb": 0, 00:25:12.126 "state": "online", 00:25:12.126 "raid_level": "raid1", 00:25:12.126 "superblock": true, 00:25:12.126 "num_base_bdevs": 2, 00:25:12.126 "num_base_bdevs_discovered": 1, 00:25:12.126 "num_base_bdevs_operational": 1, 00:25:12.126 "base_bdevs_list": [ 00:25:12.126 { 00:25:12.126 "name": null, 00:25:12.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.126 "is_configured": false, 00:25:12.126 "data_offset": 256, 00:25:12.126 "data_size": 7936 00:25:12.126 }, 00:25:12.126 { 00:25:12.126 "name": "BaseBdev2", 00:25:12.127 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:12.127 "is_configured": true, 00:25:12.127 "data_offset": 256, 00:25:12.127 "data_size": 7936 00:25:12.127 } 00:25:12.127 ] 00:25:12.127 }' 00:25:12.127 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:12.127 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:12.127 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@646 -- # local es=0 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:12.395 06:20:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:12.395 [2024-08-13 06:20:14.137690] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:12.395 [2024-08-13 06:20:14.137888] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:12.395 [2024-08-13 06:20:14.137958] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:12.395 request: 00:25:12.395 { 00:25:12.395 "base_bdev": "BaseBdev1", 00:25:12.395 "raid_bdev": "raid_bdev1", 00:25:12.395 "method": "bdev_raid_add_base_bdev", 00:25:12.395 "req_id": 1 00:25:12.395 } 00:25:12.395 Got JSON-RPC error response 00:25:12.395 response: 00:25:12.396 { 00:25:12.396 "code": -22, 00:25:12.396 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:12.396 } 00:25:12.396 06:20:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@649 -- # es=1 00:25:12.396 06:20:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:25:12.396 06:20:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:25:12.396 06:20:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:25:12.396 06:20:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.781 "name": "raid_bdev1", 00:25:13.781 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:13.781 "strip_size_kb": 0, 00:25:13.781 "state": "online", 00:25:13.781 "raid_level": "raid1", 00:25:13.781 "superblock": true, 00:25:13.781 "num_base_bdevs": 2, 00:25:13.781 "num_base_bdevs_discovered": 1, 00:25:13.781 "num_base_bdevs_operational": 1, 00:25:13.781 
"base_bdevs_list": [ 00:25:13.781 { 00:25:13.781 "name": null, 00:25:13.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.781 "is_configured": false, 00:25:13.781 "data_offset": 256, 00:25:13.781 "data_size": 7936 00:25:13.781 }, 00:25:13.781 { 00:25:13.781 "name": "BaseBdev2", 00:25:13.781 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:13.781 "is_configured": true, 00:25:13.781 "data_offset": 256, 00:25:13.781 "data_size": 7936 00:25:13.781 } 00:25:13.781 ] 00:25:13.781 }' 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.781 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:14.351 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:14.351 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:14.351 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:14.351 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:14.351 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:14.351 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.351 06:20:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.351 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:14.351 "name": "raid_bdev1", 00:25:14.351 "uuid": "fc8d61da-97e0-405c-822b-1ea95735db24", 00:25:14.351 "strip_size_kb": 0, 00:25:14.351 "state": "online", 00:25:14.351 "raid_level": "raid1", 00:25:14.351 "superblock": true, 00:25:14.351 "num_base_bdevs": 2, 00:25:14.351 "num_base_bdevs_discovered": 1, 00:25:14.351 "num_base_bdevs_operational": 1, 00:25:14.351 "base_bdevs_list": [ 00:25:14.351 { 00:25:14.351 "name": null, 00:25:14.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.351 "is_configured": false, 00:25:14.351 "data_offset": 256, 00:25:14.351 "data_size": 7936 00:25:14.351 }, 00:25:14.351 { 00:25:14.351 "name": "BaseBdev2", 00:25:14.351 "uuid": "83abfe44-2520-5be3-9dc9-7a15512dd25f", 00:25:14.351 "is_configured": true, 00:25:14.351 "data_offset": 256, 00:25:14.351 "data_size": 7936 00:25:14.351 } 00:25:14.351 ] 00:25:14.351 }' 00:25:14.351 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:14.351 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:14.351 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 106039 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 106039 ']' 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 106039 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106039 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106039' 00:25:14.611 killing process with pid 106039 00:25:14.611 Received shutdown signal, test time was about 60.000000 seconds 00:25:14.611 00:25:14.611 Latency(us) 00:25:14.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.611 =================================================================================================================== 00:25:14.611 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@965 -- # kill 106039 00:25:14.611 [2024-08-13 06:20:16.206511] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:14.611 [2024-08-13 06:20:16.206630] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.611 [2024-08-13 06:20:16.206678] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.611 [2024-08-13 06:20:16.206686] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:25:14.611 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # wait 106039 00:25:14.611 [2024-08-13 06:20:16.238020] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:14.871 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:25:14.871 00:25:14.872 real 0m27.664s 00:25:14.872 user 0m42.741s 00:25:14.872 sys 0m4.022s 00:25:14.872 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:14.872 06:20:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:14.872 ************************************ 00:25:14.872 END TEST raid_rebuild_test_sb_4k 00:25:14.872 ************************************ 00:25:14.872 06:20:16 bdev_raid -- bdev/bdev_raid.sh@982 -- # base_malloc_params='-m 32' 00:25:14.872 06:20:16 bdev_raid -- bdev/bdev_raid.sh@983 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:25:14.872 06:20:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:14.872 06:20:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:14.872 06:20:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:14.872 ************************************ 00:25:14.872 START TEST raid_state_function_test_sb_md_separate 00:25:14.872 ************************************ 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:14.872 06:20:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:14.872 Process raid pid: 106826 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=106826 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 106826' 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 106826 /var/tmp/spdk-raid.sock 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 106826 ']' 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:25:14.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:14.872 06:20:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.872 [2024-08-13 06:20:16.644232] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:25:14.872 [2024-08-13 06:20:16.644497] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.132 [2024-08-13 06:20:16.792411] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.132 [2024-08-13 06:20:16.836915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.132 [2024-08-13 06:20:16.879042] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.132 [2024-08-13 06:20:16.879153] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.699 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:15.699 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:25:15.699 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:15.958 [2024-08-13 06:20:17.606918] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:15.958 [2024-08-13 06:20:17.607001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:15.958 [2024-08-13 06:20:17.607038] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:15.958 [2024-08-13 06:20:17.607060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
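The verify_raid_bdev_state helper whose locals are being set up here (the same helper the rebuild test above used on raid_bdev1) boils down to one RPC query filtered with jq. A minimal by-hand sketch of the same check; the rpc and info shell variables are shorthand for this sketch only, everything else is taken from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Pull every raid bdev known to the target and isolate the one under test
info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# Fields the helper asserts on; right after bdev_raid_create -s against missing
# base bdevs the state reads "configuring" and num_base_bdevs_discovered is 0
echo "$info" | jq -r '.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational'

The related verify_raid_bdev_process helper applies the same pattern with '.process.type // "none"' and '.process.target // "none"' to confirm that no background rebuild is running.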
00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.958 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.267 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.267 "name": "Existed_Raid", 00:25:16.267 "uuid": "e24080ca-5c73-401c-8417-cc57d6c1e305", 00:25:16.267 "strip_size_kb": 0, 00:25:16.267 "state": "configuring", 00:25:16.267 "raid_level": "raid1", 00:25:16.267 "superblock": true, 00:25:16.267 "num_base_bdevs": 2, 00:25:16.267 "num_base_bdevs_discovered": 0, 00:25:16.267 "num_base_bdevs_operational": 2, 00:25:16.267 "base_bdevs_list": [ 00:25:16.267 { 00:25:16.267 "name": "BaseBdev1", 00:25:16.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.267 "is_configured": false, 00:25:16.267 "data_offset": 0, 00:25:16.267 "data_size": 0 00:25:16.267 }, 00:25:16.267 { 00:25:16.267 "name": "BaseBdev2", 00:25:16.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.267 "is_configured": false, 00:25:16.267 "data_offset": 0, 00:25:16.267 "data_size": 0 00:25:16.267 } 00:25:16.267 ] 00:25:16.267 }' 00:25:16.267 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.267 06:20:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.836 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:16.836 [2024-08-13 06:20:18.517300] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:16.836 [2024-08-13 06:20:18.517374] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:25:16.836 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:17.095 [2024-08-13 06:20:18.716993] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:17.095 [2024-08-13 06:20:18.717077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:17.095 [2024-08-13 06:20:18.717115] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:17.095 [2024-08-13 06:20:18.717135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:17.095 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:25:17.355 [2024-08-13 06:20:18.930064] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:17.355 BaseBdev1 00:25:17.355 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:17.355 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:17.355 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # 
local bdev_timeout= 00:25:17.355 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:25:17.355 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:17.355 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:17.355 06:20:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.615 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:17.615 [ 00:25:17.615 { 00:25:17.615 "name": "BaseBdev1", 00:25:17.615 "aliases": [ 00:25:17.615 "b9667196-419a-42d7-9640-39614733f89d" 00:25:17.615 ], 00:25:17.615 "product_name": "Malloc disk", 00:25:17.615 "block_size": 4096, 00:25:17.615 "num_blocks": 8192, 00:25:17.615 "uuid": "b9667196-419a-42d7-9640-39614733f89d", 00:25:17.615 "md_size": 32, 00:25:17.615 "md_interleave": false, 00:25:17.615 "dif_type": 0, 00:25:17.615 "assigned_rate_limits": { 00:25:17.615 "rw_ios_per_sec": 0, 00:25:17.615 "rw_mbytes_per_sec": 0, 00:25:17.615 "r_mbytes_per_sec": 0, 00:25:17.615 "w_mbytes_per_sec": 0 00:25:17.615 }, 00:25:17.615 "claimed": true, 00:25:17.615 "claim_type": "exclusive_write", 00:25:17.615 "zoned": false, 00:25:17.615 "supported_io_types": { 00:25:17.615 "read": true, 00:25:17.615 "write": true, 00:25:17.615 "unmap": true, 00:25:17.615 "flush": true, 00:25:17.615 "reset": true, 00:25:17.615 "nvme_admin": false, 00:25:17.615 "nvme_io": false, 00:25:17.615 "nvme_io_md": false, 00:25:17.615 "write_zeroes": true, 00:25:17.615 "zcopy": true, 00:25:17.615 "get_zone_info": false, 00:25:17.615 "zone_management": false, 00:25:17.615 "zone_append": false, 00:25:17.615 "compare": false, 00:25:17.615 "compare_and_write": false, 00:25:17.615 "abort": true, 00:25:17.615 "seek_hole": false, 00:25:17.615 "seek_data": false, 00:25:17.615 "copy": true, 00:25:17.615 "nvme_iov_md": false 00:25:17.615 }, 00:25:17.615 "memory_domains": [ 00:25:17.615 { 00:25:17.615 "dma_device_id": "system", 00:25:17.615 "dma_device_type": 1 00:25:17.615 }, 00:25:17.615 { 00:25:17.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.615 "dma_device_type": 2 00:25:17.615 } 00:25:17.615 ], 00:25:17.615 "driver_specific": {} 00:25:17.615 } 00:25:17.615 ] 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:17.616 06:20:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.616 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.875 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.875 "name": "Existed_Raid", 00:25:17.875 "uuid": "eff57140-edac-4234-9434-fcaf1ec50b7d", 00:25:17.875 "strip_size_kb": 0, 00:25:17.875 "state": "configuring", 00:25:17.875 "raid_level": "raid1", 00:25:17.875 "superblock": true, 00:25:17.875 "num_base_bdevs": 2, 00:25:17.875 "num_base_bdevs_discovered": 1, 00:25:17.875 "num_base_bdevs_operational": 2, 00:25:17.875 "base_bdevs_list": [ 00:25:17.875 { 00:25:17.875 "name": "BaseBdev1", 00:25:17.875 "uuid": "b9667196-419a-42d7-9640-39614733f89d", 00:25:17.875 "is_configured": true, 00:25:17.875 "data_offset": 256, 00:25:17.875 "data_size": 7936 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "name": "BaseBdev2", 00:25:17.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.875 "is_configured": false, 00:25:17.875 "data_offset": 0, 00:25:17.875 "data_size": 0 00:25:17.875 } 00:25:17.875 ] 00:25:17.875 }' 00:25:17.875 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.875 06:20:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.444 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:18.444 [2024-08-13 06:20:20.231830] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:18.444 [2024-08-13 06:20:20.231915] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:18.704 [2024-08-13 06:20:20.443510] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:18.704 [2024-08-13 06:20:20.445195] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:18.704 [2024-08-13 06:20:20.445259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 
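BaseBdev1 above comes from bdev_malloc_create 32 4096 -m 32, i.e. a 32 MiB malloc bdev with 4096-byte blocks (hence the 8192 num_blocks in the dump) and 32 bytes of separate, non-interleaved metadata per block, which is the layout the *_md_separate tests exercise. A condensed sketch of the same creation plus a layout check; the jq projection is illustrative only:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 32 MiB total, 4096-byte blocks, 32-byte separate metadata per block
$rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
# Expect block_size 4096, num_blocks 8192, md_size 32, md_interleave false
$rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 | jq '.[0] | {block_size, num_blocks, md_size, md_interleave}'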
00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.704 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.964 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.964 "name": "Existed_Raid", 00:25:18.964 "uuid": "f652b974-3ee2-4c47-8ea7-24a05747f5e1", 00:25:18.964 "strip_size_kb": 0, 00:25:18.964 "state": "configuring", 00:25:18.964 "raid_level": "raid1", 00:25:18.964 "superblock": true, 00:25:18.964 "num_base_bdevs": 2, 00:25:18.964 "num_base_bdevs_discovered": 1, 00:25:18.964 "num_base_bdevs_operational": 2, 00:25:18.964 "base_bdevs_list": [ 00:25:18.964 { 00:25:18.964 "name": "BaseBdev1", 00:25:18.964 "uuid": "b9667196-419a-42d7-9640-39614733f89d", 00:25:18.964 "is_configured": true, 00:25:18.964 "data_offset": 256, 00:25:18.964 "data_size": 7936 00:25:18.964 }, 00:25:18.964 { 00:25:18.964 "name": "BaseBdev2", 00:25:18.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.964 "is_configured": false, 00:25:18.964 "data_offset": 0, 00:25:18.964 "data_size": 0 00:25:18.964 } 00:25:18.964 ] 00:25:18.964 }' 00:25:18.964 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.964 06:20:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.533 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:25:19.793 [2024-08-13 06:20:21.362987] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:19.793 [2024-08-13 06:20:21.363665] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:25:19.793 [2024-08-13 06:20:21.363853] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:19.793 [2024-08-13 06:20:21.364214] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:25:19.793 [2024-08-13 06:20:21.364552] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000001900 00:25:19.793 [2024-08-13 06:20:21.364589] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:25:19.793 [2024-08-13 06:20:21.364895] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.793 BaseBdev2 00:25:19.793 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:19.793 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:19.793 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:19.793 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:25:19.793 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:19.793 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:19.793 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:20.053 [ 00:25:20.053 { 00:25:20.053 "name": "BaseBdev2", 00:25:20.053 "aliases": [ 00:25:20.053 "9b773525-9054-4653-990f-f23baf788843" 00:25:20.053 ], 00:25:20.053 "product_name": "Malloc disk", 00:25:20.053 "block_size": 4096, 00:25:20.053 "num_blocks": 8192, 00:25:20.053 "uuid": "9b773525-9054-4653-990f-f23baf788843", 00:25:20.053 "md_size": 32, 00:25:20.053 "md_interleave": false, 00:25:20.053 "dif_type": 0, 00:25:20.053 "assigned_rate_limits": { 00:25:20.053 "rw_ios_per_sec": 0, 00:25:20.053 "rw_mbytes_per_sec": 0, 00:25:20.053 "r_mbytes_per_sec": 0, 00:25:20.053 "w_mbytes_per_sec": 0 00:25:20.053 }, 00:25:20.053 "claimed": true, 00:25:20.053 "claim_type": "exclusive_write", 00:25:20.053 "zoned": false, 00:25:20.053 "supported_io_types": { 00:25:20.053 "read": true, 00:25:20.053 "write": true, 00:25:20.053 "unmap": true, 00:25:20.053 "flush": true, 00:25:20.053 "reset": true, 00:25:20.053 "nvme_admin": false, 00:25:20.053 "nvme_io": false, 00:25:20.053 "nvme_io_md": false, 00:25:20.053 "write_zeroes": true, 00:25:20.053 "zcopy": true, 00:25:20.053 "get_zone_info": false, 00:25:20.053 "zone_management": false, 00:25:20.053 "zone_append": false, 00:25:20.053 "compare": false, 00:25:20.053 "compare_and_write": false, 00:25:20.053 "abort": true, 00:25:20.053 "seek_hole": false, 00:25:20.053 "seek_data": false, 00:25:20.053 "copy": true, 00:25:20.053 "nvme_iov_md": false 00:25:20.053 }, 00:25:20.053 "memory_domains": [ 00:25:20.053 { 00:25:20.053 "dma_device_id": "system", 00:25:20.053 "dma_device_type": 1 00:25:20.053 }, 00:25:20.053 { 00:25:20.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.053 "dma_device_type": 2 00:25:20.053 } 00:25:20.053 ], 00:25:20.053 "driver_specific": {} 00:25:20.053 } 00:25:20.053 ] 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.053 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.313 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.313 "name": "Existed_Raid", 00:25:20.313 "uuid": "f652b974-3ee2-4c47-8ea7-24a05747f5e1", 00:25:20.313 "strip_size_kb": 0, 00:25:20.313 "state": "online", 00:25:20.313 "raid_level": "raid1", 00:25:20.313 "superblock": true, 00:25:20.313 "num_base_bdevs": 2, 00:25:20.313 "num_base_bdevs_discovered": 2, 00:25:20.313 "num_base_bdevs_operational": 2, 00:25:20.313 "base_bdevs_list": [ 00:25:20.313 { 00:25:20.313 "name": "BaseBdev1", 00:25:20.313 "uuid": "b9667196-419a-42d7-9640-39614733f89d", 00:25:20.313 "is_configured": true, 00:25:20.313 "data_offset": 256, 00:25:20.313 "data_size": 7936 00:25:20.313 }, 00:25:20.313 { 00:25:20.313 "name": "BaseBdev2", 00:25:20.313 "uuid": "9b773525-9054-4653-990f-f23baf788843", 00:25:20.313 "is_configured": true, 00:25:20.313 "data_offset": 256, 00:25:20.313 "data_size": 7936 00:25:20.313 } 00:25:20.313 ] 00:25:20.314 }' 00:25:20.314 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.314 06:20:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:20.883 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:21.143 [2024-08-13 06:20:22.721009] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:21.143 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:21.143 "name": "Existed_Raid", 00:25:21.143 "aliases": [ 00:25:21.143 "f652b974-3ee2-4c47-8ea7-24a05747f5e1" 00:25:21.143 ], 00:25:21.143 "product_name": "Raid Volume", 00:25:21.143 "block_size": 4096, 00:25:21.143 "num_blocks": 7936, 00:25:21.143 "uuid": "f652b974-3ee2-4c47-8ea7-24a05747f5e1", 00:25:21.143 "md_size": 32, 00:25:21.143 "md_interleave": false, 00:25:21.143 "dif_type": 0, 00:25:21.143 "assigned_rate_limits": { 00:25:21.143 "rw_ios_per_sec": 0, 00:25:21.143 "rw_mbytes_per_sec": 0, 00:25:21.143 "r_mbytes_per_sec": 0, 00:25:21.143 "w_mbytes_per_sec": 0 00:25:21.143 }, 00:25:21.143 "claimed": false, 00:25:21.143 "zoned": false, 00:25:21.143 "supported_io_types": { 00:25:21.143 "read": true, 00:25:21.143 "write": true, 00:25:21.143 "unmap": false, 00:25:21.143 "flush": false, 00:25:21.143 "reset": true, 00:25:21.143 "nvme_admin": false, 00:25:21.143 "nvme_io": false, 00:25:21.143 "nvme_io_md": false, 00:25:21.143 "write_zeroes": true, 00:25:21.143 "zcopy": false, 00:25:21.143 "get_zone_info": false, 00:25:21.143 "zone_management": false, 00:25:21.143 "zone_append": false, 00:25:21.143 "compare": false, 00:25:21.143 "compare_and_write": false, 00:25:21.143 "abort": false, 00:25:21.143 "seek_hole": false, 00:25:21.143 "seek_data": false, 00:25:21.143 "copy": false, 00:25:21.143 "nvme_iov_md": false 00:25:21.143 }, 00:25:21.143 "memory_domains": [ 00:25:21.143 { 00:25:21.143 "dma_device_id": "system", 00:25:21.143 "dma_device_type": 1 00:25:21.143 }, 00:25:21.143 { 00:25:21.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.143 "dma_device_type": 2 00:25:21.143 }, 00:25:21.143 { 00:25:21.143 "dma_device_id": "system", 00:25:21.143 "dma_device_type": 1 00:25:21.143 }, 00:25:21.143 { 00:25:21.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.143 "dma_device_type": 2 00:25:21.143 } 00:25:21.143 ], 00:25:21.143 "driver_specific": { 00:25:21.143 "raid": { 00:25:21.143 "uuid": "f652b974-3ee2-4c47-8ea7-24a05747f5e1", 00:25:21.143 "strip_size_kb": 0, 00:25:21.143 "state": "online", 00:25:21.143 "raid_level": "raid1", 00:25:21.143 "superblock": true, 00:25:21.143 "num_base_bdevs": 2, 00:25:21.143 "num_base_bdevs_discovered": 2, 00:25:21.143 "num_base_bdevs_operational": 2, 00:25:21.143 "base_bdevs_list": [ 00:25:21.143 { 00:25:21.143 "name": "BaseBdev1", 00:25:21.143 "uuid": "b9667196-419a-42d7-9640-39614733f89d", 00:25:21.143 "is_configured": true, 00:25:21.143 "data_offset": 256, 00:25:21.143 "data_size": 7936 00:25:21.143 }, 00:25:21.143 { 00:25:21.143 "name": "BaseBdev2", 00:25:21.143 "uuid": "9b773525-9054-4653-990f-f23baf788843", 00:25:21.143 "is_configured": true, 00:25:21.143 "data_offset": 256, 00:25:21.143 "data_size": 7936 00:25:21.143 } 00:25:21.143 ] 00:25:21.143 } 00:25:21.144 } 00:25:21.144 }' 00:25:21.144 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- 
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:21.144 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:21.144 BaseBdev2' 00:25:21.144 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:21.144 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:21.144 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:21.403 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:21.403 "name": "BaseBdev1", 00:25:21.403 "aliases": [ 00:25:21.403 "b9667196-419a-42d7-9640-39614733f89d" 00:25:21.403 ], 00:25:21.403 "product_name": "Malloc disk", 00:25:21.403 "block_size": 4096, 00:25:21.403 "num_blocks": 8192, 00:25:21.403 "uuid": "b9667196-419a-42d7-9640-39614733f89d", 00:25:21.403 "md_size": 32, 00:25:21.403 "md_interleave": false, 00:25:21.403 "dif_type": 0, 00:25:21.403 "assigned_rate_limits": { 00:25:21.403 "rw_ios_per_sec": 0, 00:25:21.403 "rw_mbytes_per_sec": 0, 00:25:21.403 "r_mbytes_per_sec": 0, 00:25:21.403 "w_mbytes_per_sec": 0 00:25:21.403 }, 00:25:21.403 "claimed": true, 00:25:21.404 "claim_type": "exclusive_write", 00:25:21.404 "zoned": false, 00:25:21.404 "supported_io_types": { 00:25:21.404 "read": true, 00:25:21.404 "write": true, 00:25:21.404 "unmap": true, 00:25:21.404 "flush": true, 00:25:21.404 "reset": true, 00:25:21.404 "nvme_admin": false, 00:25:21.404 "nvme_io": false, 00:25:21.404 "nvme_io_md": false, 00:25:21.404 "write_zeroes": true, 00:25:21.404 "zcopy": true, 00:25:21.404 "get_zone_info": false, 00:25:21.404 "zone_management": false, 00:25:21.404 "zone_append": false, 00:25:21.404 "compare": false, 00:25:21.404 "compare_and_write": false, 00:25:21.404 "abort": true, 00:25:21.404 "seek_hole": false, 00:25:21.404 "seek_data": false, 00:25:21.404 "copy": true, 00:25:21.404 "nvme_iov_md": false 00:25:21.404 }, 00:25:21.404 "memory_domains": [ 00:25:21.404 { 00:25:21.404 "dma_device_id": "system", 00:25:21.404 "dma_device_type": 1 00:25:21.404 }, 00:25:21.404 { 00:25:21.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.404 "dma_device_type": 2 00:25:21.404 } 00:25:21.404 ], 00:25:21.404 "driver_specific": {} 00:25:21.404 }' 00:25:21.404 06:20:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.404 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.404 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:21.404 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.404 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.404 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:21.404 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.663 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.663 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- 
# [[ false == false ]] 00:25:21.663 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.663 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.663 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:21.663 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:21.664 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:21.664 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:21.923 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:21.923 "name": "BaseBdev2", 00:25:21.923 "aliases": [ 00:25:21.923 "9b773525-9054-4653-990f-f23baf788843" 00:25:21.923 ], 00:25:21.923 "product_name": "Malloc disk", 00:25:21.923 "block_size": 4096, 00:25:21.923 "num_blocks": 8192, 00:25:21.923 "uuid": "9b773525-9054-4653-990f-f23baf788843", 00:25:21.923 "md_size": 32, 00:25:21.923 "md_interleave": false, 00:25:21.923 "dif_type": 0, 00:25:21.923 "assigned_rate_limits": { 00:25:21.923 "rw_ios_per_sec": 0, 00:25:21.923 "rw_mbytes_per_sec": 0, 00:25:21.923 "r_mbytes_per_sec": 0, 00:25:21.923 "w_mbytes_per_sec": 0 00:25:21.923 }, 00:25:21.923 "claimed": true, 00:25:21.923 "claim_type": "exclusive_write", 00:25:21.923 "zoned": false, 00:25:21.923 "supported_io_types": { 00:25:21.923 "read": true, 00:25:21.923 "write": true, 00:25:21.923 "unmap": true, 00:25:21.923 "flush": true, 00:25:21.923 "reset": true, 00:25:21.923 "nvme_admin": false, 00:25:21.923 "nvme_io": false, 00:25:21.923 "nvme_io_md": false, 00:25:21.923 "write_zeroes": true, 00:25:21.923 "zcopy": true, 00:25:21.923 "get_zone_info": false, 00:25:21.923 "zone_management": false, 00:25:21.923 "zone_append": false, 00:25:21.923 "compare": false, 00:25:21.923 "compare_and_write": false, 00:25:21.923 "abort": true, 00:25:21.923 "seek_hole": false, 00:25:21.923 "seek_data": false, 00:25:21.923 "copy": true, 00:25:21.923 "nvme_iov_md": false 00:25:21.923 }, 00:25:21.923 "memory_domains": [ 00:25:21.923 { 00:25:21.923 "dma_device_id": "system", 00:25:21.923 "dma_device_type": 1 00:25:21.923 }, 00:25:21.923 { 00:25:21.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.923 "dma_device_type": 2 00:25:21.923 } 00:25:21.923 ], 00:25:21.923 "driver_specific": {} 00:25:21.923 }' 00:25:21.923 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.923 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.923 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:21.923 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.923 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:22.182 06:20:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:22.441 [2024-08-13 06:20:24.082428] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:22.441 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.442 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.701 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.701 "name": "Existed_Raid", 00:25:22.701 "uuid": "f652b974-3ee2-4c47-8ea7-24a05747f5e1", 00:25:22.701 "strip_size_kb": 0, 00:25:22.701 "state": "online", 00:25:22.701 "raid_level": "raid1", 00:25:22.701 "superblock": true, 00:25:22.701 "num_base_bdevs": 2, 00:25:22.701 "num_base_bdevs_discovered": 1, 00:25:22.701 "num_base_bdevs_operational": 1, 00:25:22.701 
"base_bdevs_list": [ 00:25:22.701 { 00:25:22.701 "name": null, 00:25:22.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.701 "is_configured": false, 00:25:22.701 "data_offset": 256, 00:25:22.701 "data_size": 7936 00:25:22.701 }, 00:25:22.701 { 00:25:22.701 "name": "BaseBdev2", 00:25:22.701 "uuid": "9b773525-9054-4653-990f-f23baf788843", 00:25:22.701 "is_configured": true, 00:25:22.701 "data_offset": 256, 00:25:22.701 "data_size": 7936 00:25:22.701 } 00:25:22.701 ] 00:25:22.701 }' 00:25:22.701 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.701 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.271 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:23.271 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:23.271 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.271 06:20:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:23.271 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:23.271 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:23.271 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:23.531 [2024-08-13 06:20:25.224054] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:23.531 [2024-08-13 06:20:25.224186] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:23.531 [2024-08-13 06:20:25.236173] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.531 [2024-08-13 06:20:25.236271] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:23.531 [2024-08-13 06:20:25.236308] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:25:23.531 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:23.531 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:23.531 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.531 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 106826 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@946 -- # '[' -z 106826 ']' 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 106826 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106826 00:25:23.791 killing process with pid 106826 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106826' 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 106826 00:25:23.791 [2024-08-13 06:20:25.520887] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.791 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 106826 00:25:23.791 [2024-08-13 06:20:25.521829] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:24.051 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:25:24.051 00:25:24.051 real 0m9.223s 00:25:24.051 user 0m16.315s 00:25:24.051 sys 0m1.619s 00:25:24.051 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:24.051 06:20:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.051 ************************************ 00:25:24.051 END TEST raid_state_function_test_sb_md_separate 00:25:24.051 ************************************ 00:25:24.051 06:20:25 bdev_raid -- bdev/bdev_raid.sh@984 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:25:24.051 06:20:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:24.051 06:20:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:24.051 06:20:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:24.312 ************************************ 00:25:24.312 START TEST raid_superblock_test_md_separate 00:25:24.312 ************************************ 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 
-- # base_bdevs_pt_uuid=() 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=107161 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 107161 /var/tmp/spdk-raid.sock 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 107161 ']' 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:24.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:24.312 06:20:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.312 [2024-08-13 06:20:25.936771] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
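For readers following the trace: the superblock test never links against SPDK directly; everything is driven over the JSON-RPC UNIX socket that bdev_svc opens. A minimal sketch of the equivalent manual startup, using only the binary path, socket and rpc.py invocation visible in the trace (the readiness loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_get_methods is just one convenient probe, not necessarily the one the harness uses):

    # Start the bare bdev service with raid debug logging on a private RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Poll until the app answers RPCs on that socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done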
00:25:24.312 [2024-08-13 06:20:25.936936] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107161 ] 00:25:24.312 [2024-08-13 06:20:26.082001] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.572 [2024-08-13 06:20:26.128824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.572 [2024-08-13 06:20:26.171764] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:24.572 [2024-08-13 06:20:26.171875] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:25.141 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:25:25.401 malloc1 00:25:25.401 06:20:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:25.401 [2024-08-13 06:20:27.128565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:25.401 [2024-08-13 06:20:27.128694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.401 [2024-08-13 06:20:27.128721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:25.401 [2024-08-13 06:20:27.128730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.401 [2024-08-13 06:20:27.130495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.401 [2024-08-13 06:20:27.130540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:25.401 pt1 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:25:25.401 
06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:25.401 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:25:25.661 malloc2 00:25:25.661 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:25.920 [2024-08-13 06:20:27.546332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:25.920 [2024-08-13 06:20:27.546440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.920 [2024-08-13 06:20:27.546477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:25.920 [2024-08-13 06:20:27.546505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.920 [2024-08-13 06:20:27.548253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.920 [2024-08-13 06:20:27.548323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:25.920 pt2 00:25:25.920 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:25:25.920 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:25:25.920 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:25:26.180 [2024-08-13 06:20:27.749986] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:26.180 [2024-08-13 06:20:27.751769] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:26.180 [2024-08-13 06:20:27.751979] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:25:26.180 [2024-08-13 06:20:27.752041] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:26.180 [2024-08-13 06:20:27.752141] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:25:26.180 [2024-08-13 06:20:27.752266] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:25:26.180 [2024-08-13 06:20:27.752304] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:25:26.180 [2024-08-13 06:20:27.752421] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.180 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.440 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.440 "name": "raid_bdev1", 00:25:26.440 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:26.440 "strip_size_kb": 0, 00:25:26.440 "state": "online", 00:25:26.440 "raid_level": "raid1", 00:25:26.440 "superblock": true, 00:25:26.440 "num_base_bdevs": 2, 00:25:26.440 "num_base_bdevs_discovered": 2, 00:25:26.440 "num_base_bdevs_operational": 2, 00:25:26.440 "base_bdevs_list": [ 00:25:26.440 { 00:25:26.440 "name": "pt1", 00:25:26.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:26.440 "is_configured": true, 00:25:26.440 "data_offset": 256, 00:25:26.440 "data_size": 7936 00:25:26.440 }, 00:25:26.440 { 00:25:26.440 "name": "pt2", 00:25:26.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:26.440 "is_configured": true, 00:25:26.440 "data_offset": 256, 00:25:26.440 "data_size": 7936 00:25:26.440 } 00:25:26.440 ] 00:25:26.440 }' 00:25:26.440 06:20:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.440 06:20:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:27.009 [2024-08-13 
06:20:28.704528] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.009 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:27.009 "name": "raid_bdev1", 00:25:27.009 "aliases": [ 00:25:27.009 "57d15232-812f-4fdf-918a-e57b7b6e7570" 00:25:27.009 ], 00:25:27.009 "product_name": "Raid Volume", 00:25:27.009 "block_size": 4096, 00:25:27.009 "num_blocks": 7936, 00:25:27.009 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:27.009 "md_size": 32, 00:25:27.009 "md_interleave": false, 00:25:27.009 "dif_type": 0, 00:25:27.009 "assigned_rate_limits": { 00:25:27.009 "rw_ios_per_sec": 0, 00:25:27.009 "rw_mbytes_per_sec": 0, 00:25:27.009 "r_mbytes_per_sec": 0, 00:25:27.009 "w_mbytes_per_sec": 0 00:25:27.009 }, 00:25:27.009 "claimed": false, 00:25:27.009 "zoned": false, 00:25:27.009 "supported_io_types": { 00:25:27.009 "read": true, 00:25:27.009 "write": true, 00:25:27.009 "unmap": false, 00:25:27.009 "flush": false, 00:25:27.009 "reset": true, 00:25:27.009 "nvme_admin": false, 00:25:27.009 "nvme_io": false, 00:25:27.009 "nvme_io_md": false, 00:25:27.009 "write_zeroes": true, 00:25:27.009 "zcopy": false, 00:25:27.009 "get_zone_info": false, 00:25:27.009 "zone_management": false, 00:25:27.009 "zone_append": false, 00:25:27.010 "compare": false, 00:25:27.010 "compare_and_write": false, 00:25:27.010 "abort": false, 00:25:27.010 "seek_hole": false, 00:25:27.010 "seek_data": false, 00:25:27.010 "copy": false, 00:25:27.010 "nvme_iov_md": false 00:25:27.010 }, 00:25:27.010 "memory_domains": [ 00:25:27.010 { 00:25:27.010 "dma_device_id": "system", 00:25:27.010 "dma_device_type": 1 00:25:27.010 }, 00:25:27.010 { 00:25:27.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.010 "dma_device_type": 2 00:25:27.010 }, 00:25:27.010 { 00:25:27.010 "dma_device_id": "system", 00:25:27.010 "dma_device_type": 1 00:25:27.010 }, 00:25:27.010 { 00:25:27.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.010 "dma_device_type": 2 00:25:27.010 } 00:25:27.010 ], 00:25:27.010 "driver_specific": { 00:25:27.010 "raid": { 00:25:27.010 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:27.010 "strip_size_kb": 0, 00:25:27.010 "state": "online", 00:25:27.010 "raid_level": "raid1", 00:25:27.010 "superblock": true, 00:25:27.010 "num_base_bdevs": 2, 00:25:27.010 "num_base_bdevs_discovered": 2, 00:25:27.010 "num_base_bdevs_operational": 2, 00:25:27.010 "base_bdevs_list": [ 00:25:27.010 { 00:25:27.010 "name": "pt1", 00:25:27.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.010 "is_configured": true, 00:25:27.010 "data_offset": 256, 00:25:27.010 "data_size": 7936 00:25:27.010 }, 00:25:27.010 { 00:25:27.010 "name": "pt2", 00:25:27.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.010 "is_configured": true, 00:25:27.010 "data_offset": 256, 00:25:27.010 "data_size": 7936 00:25:27.010 } 00:25:27.010 ] 00:25:27.010 } 00:25:27.010 } 00:25:27.010 }' 00:25:27.010 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:27.010 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:27.010 pt2' 00:25:27.010 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:27.010 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:27.010 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:27.270 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:27.270 "name": "pt1", 00:25:27.270 "aliases": [ 00:25:27.270 "00000000-0000-0000-0000-000000000001" 00:25:27.270 ], 00:25:27.270 "product_name": "passthru", 00:25:27.270 "block_size": 4096, 00:25:27.270 "num_blocks": 8192, 00:25:27.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.270 "md_size": 32, 00:25:27.270 "md_interleave": false, 00:25:27.270 "dif_type": 0, 00:25:27.270 "assigned_rate_limits": { 00:25:27.270 "rw_ios_per_sec": 0, 00:25:27.270 "rw_mbytes_per_sec": 0, 00:25:27.270 "r_mbytes_per_sec": 0, 00:25:27.270 "w_mbytes_per_sec": 0 00:25:27.270 }, 00:25:27.270 "claimed": true, 00:25:27.270 "claim_type": "exclusive_write", 00:25:27.270 "zoned": false, 00:25:27.270 "supported_io_types": { 00:25:27.270 "read": true, 00:25:27.270 "write": true, 00:25:27.270 "unmap": true, 00:25:27.270 "flush": true, 00:25:27.270 "reset": true, 00:25:27.270 "nvme_admin": false, 00:25:27.270 "nvme_io": false, 00:25:27.270 "nvme_io_md": false, 00:25:27.270 "write_zeroes": true, 00:25:27.270 "zcopy": true, 00:25:27.270 "get_zone_info": false, 00:25:27.270 "zone_management": false, 00:25:27.270 "zone_append": false, 00:25:27.270 "compare": false, 00:25:27.270 "compare_and_write": false, 00:25:27.270 "abort": true, 00:25:27.270 "seek_hole": false, 00:25:27.270 "seek_data": false, 00:25:27.270 "copy": true, 00:25:27.270 "nvme_iov_md": false 00:25:27.270 }, 00:25:27.270 "memory_domains": [ 00:25:27.270 { 00:25:27.270 "dma_device_id": "system", 00:25:27.270 "dma_device_type": 1 00:25:27.270 }, 00:25:27.270 { 00:25:27.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.270 "dma_device_type": 2 00:25:27.270 } 00:25:27.270 ], 00:25:27.270 "driver_specific": { 00:25:27.270 "passthru": { 00:25:27.270 "name": "pt1", 00:25:27.270 "base_bdev_name": "malloc1" 00:25:27.270 } 00:25:27.270 } 00:25:27.270 }' 00:25:27.270 06:20:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.270 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.270 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:27.270 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:27.530 
06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:27.530 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:27.789 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:27.789 "name": "pt2", 00:25:27.789 "aliases": [ 00:25:27.789 "00000000-0000-0000-0000-000000000002" 00:25:27.789 ], 00:25:27.789 "product_name": "passthru", 00:25:27.789 "block_size": 4096, 00:25:27.789 "num_blocks": 8192, 00:25:27.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.789 "md_size": 32, 00:25:27.789 "md_interleave": false, 00:25:27.789 "dif_type": 0, 00:25:27.789 "assigned_rate_limits": { 00:25:27.789 "rw_ios_per_sec": 0, 00:25:27.789 "rw_mbytes_per_sec": 0, 00:25:27.789 "r_mbytes_per_sec": 0, 00:25:27.789 "w_mbytes_per_sec": 0 00:25:27.789 }, 00:25:27.789 "claimed": true, 00:25:27.789 "claim_type": "exclusive_write", 00:25:27.789 "zoned": false, 00:25:27.789 "supported_io_types": { 00:25:27.789 "read": true, 00:25:27.789 "write": true, 00:25:27.789 "unmap": true, 00:25:27.789 "flush": true, 00:25:27.789 "reset": true, 00:25:27.789 "nvme_admin": false, 00:25:27.789 "nvme_io": false, 00:25:27.789 "nvme_io_md": false, 00:25:27.789 "write_zeroes": true, 00:25:27.789 "zcopy": true, 00:25:27.789 "get_zone_info": false, 00:25:27.789 "zone_management": false, 00:25:27.789 "zone_append": false, 00:25:27.789 "compare": false, 00:25:27.789 "compare_and_write": false, 00:25:27.789 "abort": true, 00:25:27.789 "seek_hole": false, 00:25:27.789 "seek_data": false, 00:25:27.789 "copy": true, 00:25:27.789 "nvme_iov_md": false 00:25:27.789 }, 00:25:27.789 "memory_domains": [ 00:25:27.789 { 00:25:27.789 "dma_device_id": "system", 00:25:27.789 "dma_device_type": 1 00:25:27.789 }, 00:25:27.789 { 00:25:27.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.789 "dma_device_type": 2 00:25:27.789 } 00:25:27.789 ], 00:25:27.789 "driver_specific": { 00:25:27.789 "passthru": { 00:25:27.789 "name": "pt2", 00:25:27.789 "base_bdev_name": "malloc2" 00:25:27.789 } 00:25:27.789 } 00:25:27.789 }' 00:25:27.789 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.789 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.789 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:27.789 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:28.049 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:28.308 
06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:25:28.308 06:20:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:28.308 [2024-08-13 06:20:30.026354] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:28.308 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=57d15232-812f-4fdf-918a-e57b7b6e7570 00:25:28.308 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z 57d15232-812f-4fdf-918a-e57b7b6e7570 ']' 00:25:28.308 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:28.568 [2024-08-13 06:20:30.209851] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.568 [2024-08-13 06:20:30.209872] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.568 [2024-08-13 06:20:30.209947] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.568 [2024-08-13 06:20:30.209998] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.568 [2024-08-13 06:20:30.210010] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:25:28.568 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.568 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:25:28.828 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:25:28.828 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:25:28.828 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:25:28.828 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:29.087 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:25:29.087 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:29.087 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:29.087 06:20:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@646 -- # local es=0 00:25:29.347 06:20:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:29.347 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:29.607 [2024-08-13 06:20:31.248004] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:29.607 [2024-08-13 06:20:31.249667] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:29.607 [2024-08-13 06:20:31.249757] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:29.607 [2024-08-13 06:20:31.249832] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:29.607 [2024-08-13 06:20:31.249883] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:29.607 [2024-08-13 06:20:31.249928] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:25:29.607 request: 00:25:29.607 { 00:25:29.607 "name": "raid_bdev1", 00:25:29.607 "raid_level": "raid1", 00:25:29.607 "base_bdevs": [ 00:25:29.607 "malloc1", 00:25:29.607 "malloc2" 00:25:29.607 ], 00:25:29.607 "superblock": false, 00:25:29.607 "method": "bdev_raid_create", 00:25:29.607 "req_id": 1 00:25:29.607 } 00:25:29.607 Got JSON-RPC error response 00:25:29.607 response: 00:25:29.607 { 00:25:29.607 "code": -17, 00:25:29.607 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:29.607 } 00:25:29.607 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@649 -- # es=1 00:25:29.607 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:25:29.607 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:25:29.607 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:25:29.607 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.607 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:25:29.867 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:25:29.867 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:25:29.867 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:29.867 [2024-08-13 06:20:31.647254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:29.867 [2024-08-13 06:20:31.647348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.867 [2024-08-13 06:20:31.647380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:29.867 [2024-08-13 06:20:31.647412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.867 [2024-08-13 06:20:31.649134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.867 [2024-08-13 06:20:31.649201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:29.867 [2024-08-13 06:20:31.649261] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:29.867 [2024-08-13 06:20:31.649318] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:29.867 pt1 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.127 "name": "raid_bdev1", 00:25:30.127 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:30.127 "strip_size_kb": 0, 00:25:30.127 "state": "configuring", 00:25:30.127 "raid_level": "raid1", 00:25:30.127 "superblock": 
true, 00:25:30.127 "num_base_bdevs": 2, 00:25:30.127 "num_base_bdevs_discovered": 1, 00:25:30.127 "num_base_bdevs_operational": 2, 00:25:30.127 "base_bdevs_list": [ 00:25:30.127 { 00:25:30.127 "name": "pt1", 00:25:30.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:30.127 "is_configured": true, 00:25:30.127 "data_offset": 256, 00:25:30.127 "data_size": 7936 00:25:30.127 }, 00:25:30.127 { 00:25:30.127 "name": null, 00:25:30.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:30.127 "is_configured": false, 00:25:30.127 "data_offset": 256, 00:25:30.127 "data_size": 7936 00:25:30.127 } 00:25:30.127 ] 00:25:30.127 }' 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.127 06:20:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.696 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:25:30.696 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:25:30.696 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:30.696 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:30.956 [2024-08-13 06:20:32.553774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:30.956 [2024-08-13 06:20:32.553858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.956 [2024-08-13 06:20:32.553887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:30.956 [2024-08-13 06:20:32.553911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.956 [2024-08-13 06:20:32.554059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.956 [2024-08-13 06:20:32.554110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:30.956 [2024-08-13 06:20:32.554182] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:30.956 [2024-08-13 06:20:32.554223] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:30.956 [2024-08-13 06:20:32.554348] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:25:30.956 [2024-08-13 06:20:32.554390] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:30.956 [2024-08-13 06:20:32.554474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:25:30.956 [2024-08-13 06:20:32.554582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:25:30.956 [2024-08-13 06:20:32.554618] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:25:30.956 [2024-08-13 06:20:32.554706] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.956 pt2 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.956 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.215 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:31.215 "name": "raid_bdev1", 00:25:31.215 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:31.215 "strip_size_kb": 0, 00:25:31.215 "state": "online", 00:25:31.215 "raid_level": "raid1", 00:25:31.215 "superblock": true, 00:25:31.215 "num_base_bdevs": 2, 00:25:31.215 "num_base_bdevs_discovered": 2, 00:25:31.215 "num_base_bdevs_operational": 2, 00:25:31.215 "base_bdevs_list": [ 00:25:31.215 { 00:25:31.215 "name": "pt1", 00:25:31.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:31.215 "is_configured": true, 00:25:31.215 "data_offset": 256, 00:25:31.215 "data_size": 7936 00:25:31.215 }, 00:25:31.215 { 00:25:31.215 "name": "pt2", 00:25:31.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:31.215 "is_configured": true, 00:25:31.215 "data_offset": 256, 00:25:31.215 "data_size": 7936 00:25:31.215 } 00:25:31.215 ] 00:25:31.215 }' 00:25:31.215 06:20:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:31.215 06:20:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:31.784 06:20:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:31.784 [2024-08-13 06:20:33.520359] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:31.784 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:31.784 "name": "raid_bdev1", 00:25:31.784 "aliases": [ 00:25:31.784 "57d15232-812f-4fdf-918a-e57b7b6e7570" 00:25:31.784 ], 00:25:31.784 "product_name": "Raid Volume", 00:25:31.784 "block_size": 4096, 00:25:31.784 "num_blocks": 7936, 00:25:31.784 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:31.784 "md_size": 32, 00:25:31.784 "md_interleave": false, 00:25:31.784 "dif_type": 0, 00:25:31.785 "assigned_rate_limits": { 00:25:31.785 "rw_ios_per_sec": 0, 00:25:31.785 "rw_mbytes_per_sec": 0, 00:25:31.785 "r_mbytes_per_sec": 0, 00:25:31.785 "w_mbytes_per_sec": 0 00:25:31.785 }, 00:25:31.785 "claimed": false, 00:25:31.785 "zoned": false, 00:25:31.785 "supported_io_types": { 00:25:31.785 "read": true, 00:25:31.785 "write": true, 00:25:31.785 "unmap": false, 00:25:31.785 "flush": false, 00:25:31.785 "reset": true, 00:25:31.785 "nvme_admin": false, 00:25:31.785 "nvme_io": false, 00:25:31.785 "nvme_io_md": false, 00:25:31.785 "write_zeroes": true, 00:25:31.785 "zcopy": false, 00:25:31.785 "get_zone_info": false, 00:25:31.785 "zone_management": false, 00:25:31.785 "zone_append": false, 00:25:31.785 "compare": false, 00:25:31.785 "compare_and_write": false, 00:25:31.785 "abort": false, 00:25:31.785 "seek_hole": false, 00:25:31.785 "seek_data": false, 00:25:31.785 "copy": false, 00:25:31.785 "nvme_iov_md": false 00:25:31.785 }, 00:25:31.785 "memory_domains": [ 00:25:31.785 { 00:25:31.785 "dma_device_id": "system", 00:25:31.785 "dma_device_type": 1 00:25:31.785 }, 00:25:31.785 { 00:25:31.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.785 "dma_device_type": 2 00:25:31.785 }, 00:25:31.785 { 00:25:31.785 "dma_device_id": "system", 00:25:31.785 "dma_device_type": 1 00:25:31.785 }, 00:25:31.785 { 00:25:31.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.785 "dma_device_type": 2 00:25:31.785 } 00:25:31.785 ], 00:25:31.785 "driver_specific": { 00:25:31.785 "raid": { 00:25:31.785 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:31.785 "strip_size_kb": 0, 00:25:31.785 "state": "online", 00:25:31.785 "raid_level": "raid1", 00:25:31.785 "superblock": true, 00:25:31.785 "num_base_bdevs": 2, 00:25:31.785 "num_base_bdevs_discovered": 2, 00:25:31.785 "num_base_bdevs_operational": 2, 00:25:31.785 "base_bdevs_list": [ 00:25:31.785 { 00:25:31.785 "name": "pt1", 00:25:31.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:31.785 "is_configured": true, 00:25:31.785 "data_offset": 256, 00:25:31.785 "data_size": 7936 00:25:31.785 }, 00:25:31.785 { 00:25:31.785 "name": "pt2", 00:25:31.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:31.785 "is_configured": true, 00:25:31.785 "data_offset": 256, 00:25:31.785 "data_size": 7936 00:25:31.785 } 00:25:31.785 ] 00:25:31.785 } 00:25:31.785 } 00:25:31.785 }' 00:25:31.785 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:31.785 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:31.785 pt2' 00:25:31.785 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.785 06:20:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.785 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:32.051 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.051 "name": "pt1", 00:25:32.051 "aliases": [ 00:25:32.051 "00000000-0000-0000-0000-000000000001" 00:25:32.051 ], 00:25:32.051 "product_name": "passthru", 00:25:32.051 "block_size": 4096, 00:25:32.051 "num_blocks": 8192, 00:25:32.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:32.051 "md_size": 32, 00:25:32.051 "md_interleave": false, 00:25:32.051 "dif_type": 0, 00:25:32.051 "assigned_rate_limits": { 00:25:32.051 "rw_ios_per_sec": 0, 00:25:32.051 "rw_mbytes_per_sec": 0, 00:25:32.051 "r_mbytes_per_sec": 0, 00:25:32.051 "w_mbytes_per_sec": 0 00:25:32.051 }, 00:25:32.051 "claimed": true, 00:25:32.051 "claim_type": "exclusive_write", 00:25:32.051 "zoned": false, 00:25:32.051 "supported_io_types": { 00:25:32.051 "read": true, 00:25:32.051 "write": true, 00:25:32.051 "unmap": true, 00:25:32.051 "flush": true, 00:25:32.051 "reset": true, 00:25:32.051 "nvme_admin": false, 00:25:32.051 "nvme_io": false, 00:25:32.051 "nvme_io_md": false, 00:25:32.051 "write_zeroes": true, 00:25:32.051 "zcopy": true, 00:25:32.051 "get_zone_info": false, 00:25:32.051 "zone_management": false, 00:25:32.051 "zone_append": false, 00:25:32.051 "compare": false, 00:25:32.051 "compare_and_write": false, 00:25:32.051 "abort": true, 00:25:32.051 "seek_hole": false, 00:25:32.051 "seek_data": false, 00:25:32.051 "copy": true, 00:25:32.051 "nvme_iov_md": false 00:25:32.051 }, 00:25:32.051 "memory_domains": [ 00:25:32.051 { 00:25:32.051 "dma_device_id": "system", 00:25:32.051 "dma_device_type": 1 00:25:32.051 }, 00:25:32.051 { 00:25:32.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.051 "dma_device_type": 2 00:25:32.051 } 00:25:32.051 ], 00:25:32.051 "driver_specific": { 00:25:32.051 "passthru": { 00:25:32.051 "name": "pt1", 00:25:32.051 "base_bdev_name": "malloc1" 00:25:32.051 } 00:25:32.051 } 00:25:32.051 }' 00:25:32.051 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.051 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.316 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:32.316 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.316 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.316 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:32.316 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.316 06:20:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.316 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:25:32.316 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.316 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.316 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:32.316 06:20:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:32.316 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:32.316 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:32.576 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.576 "name": "pt2", 00:25:32.576 "aliases": [ 00:25:32.576 "00000000-0000-0000-0000-000000000002" 00:25:32.576 ], 00:25:32.576 "product_name": "passthru", 00:25:32.576 "block_size": 4096, 00:25:32.576 "num_blocks": 8192, 00:25:32.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:32.576 "md_size": 32, 00:25:32.576 "md_interleave": false, 00:25:32.576 "dif_type": 0, 00:25:32.576 "assigned_rate_limits": { 00:25:32.576 "rw_ios_per_sec": 0, 00:25:32.576 "rw_mbytes_per_sec": 0, 00:25:32.576 "r_mbytes_per_sec": 0, 00:25:32.576 "w_mbytes_per_sec": 0 00:25:32.576 }, 00:25:32.576 "claimed": true, 00:25:32.576 "claim_type": "exclusive_write", 00:25:32.576 "zoned": false, 00:25:32.576 "supported_io_types": { 00:25:32.576 "read": true, 00:25:32.576 "write": true, 00:25:32.576 "unmap": true, 00:25:32.576 "flush": true, 00:25:32.576 "reset": true, 00:25:32.576 "nvme_admin": false, 00:25:32.576 "nvme_io": false, 00:25:32.576 "nvme_io_md": false, 00:25:32.576 "write_zeroes": true, 00:25:32.576 "zcopy": true, 00:25:32.576 "get_zone_info": false, 00:25:32.576 "zone_management": false, 00:25:32.576 "zone_append": false, 00:25:32.576 "compare": false, 00:25:32.576 "compare_and_write": false, 00:25:32.576 "abort": true, 00:25:32.576 "seek_hole": false, 00:25:32.576 "seek_data": false, 00:25:32.576 "copy": true, 00:25:32.576 "nvme_iov_md": false 00:25:32.576 }, 00:25:32.576 "memory_domains": [ 00:25:32.576 { 00:25:32.576 "dma_device_id": "system", 00:25:32.576 "dma_device_type": 1 00:25:32.576 }, 00:25:32.576 { 00:25:32.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.576 "dma_device_type": 2 00:25:32.576 } 00:25:32.576 ], 00:25:32.576 "driver_specific": { 00:25:32.576 "passthru": { 00:25:32.576 "name": "pt2", 00:25:32.576 "base_bdev_name": "malloc2" 00:25:32.576 } 00:25:32.576 } 00:25:32.576 }' 00:25:32.576 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.576 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
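The run of [[ ... == ... ]] comparisons above is verify_raid_bdev_properties applying the same md_separate checks to raid_bdev1, pt1 and pt2 in turn. Condensed into one sketch (the loop is an editorial condensation rather than the script's literal control flow; the RPC, socket and expected values are the ones shown in the trace):

    for b in raid_bdev1 pt1 pt2; do
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
               bdev_get_bdevs -b "$b" | jq '.[]')
        [[ $(jq .block_size    <<<"$info") == 4096  ]]  # 4 KiB data blocks
        [[ $(jq .md_size       <<<"$info") == 32    ]]  # 32-byte metadata region per block
        [[ $(jq .md_interleave <<<"$info") == false ]]  # metadata held separately, not interleaved
        [[ $(jq .dif_type      <<<"$info") == 0     ]]  # no end-to-end data protection
    done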
00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:32.835 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:25:33.095 [2024-08-13 06:20:34.750453] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:33.095 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' 57d15232-812f-4fdf-918a-e57b7b6e7570 '!=' 57d15232-812f-4fdf-918a-e57b7b6e7570 ']' 00:25:33.095 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:25:33.095 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:33.095 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:25:33.095 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:33.354 [2024-08-13 06:20:34.957923] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.354 06:20:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.626 06:20:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.626 "name": "raid_bdev1", 00:25:33.626 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:33.626 "strip_size_kb": 0, 00:25:33.626 "state": "online", 00:25:33.626 "raid_level": "raid1", 00:25:33.626 "superblock": true, 00:25:33.626 "num_base_bdevs": 2, 00:25:33.626 "num_base_bdevs_discovered": 1, 00:25:33.626 "num_base_bdevs_operational": 1, 00:25:33.626 "base_bdevs_list": [ 00:25:33.626 { 00:25:33.626 "name": null, 00:25:33.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.626 "is_configured": false, 00:25:33.626 
"data_offset": 256, 00:25:33.626 "data_size": 7936 00:25:33.626 }, 00:25:33.626 { 00:25:33.626 "name": "pt2", 00:25:33.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:33.626 "is_configured": true, 00:25:33.626 "data_offset": 256, 00:25:33.626 "data_size": 7936 00:25:33.626 } 00:25:33.626 ] 00:25:33.626 }' 00:25:33.626 06:20:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.626 06:20:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.240 06:20:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:34.240 [2024-08-13 06:20:35.928197] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:34.240 [2024-08-13 06:20:35.928264] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:34.240 [2024-08-13 06:20:35.928330] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:34.240 [2024-08-13 06:20:35.928384] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:34.240 [2024-08-13 06:20:35.928434] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:25:34.240 06:20:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.240 06:20:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:25:34.514 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:25:34.514 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:25:34.514 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:34.514 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:25:34.514 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:34.774 [2024-08-13 06:20:36.499171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:34.774 [2024-08-13 06:20:36.499264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.774 [2024-08-13 06:20:36.499295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:25:34.774 [2024-08-13 06:20:36.499330] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.774 [2024-08-13 06:20:36.501062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.774 [2024-08-13 06:20:36.501131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:34.774 [2024-08-13 06:20:36.501189] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:34.774 [2024-08-13 06:20:36.501239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:34.774 [2024-08-13 06:20:36.501332] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:25:34.774 [2024-08-13 06:20:36.501369] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:34.774 [2024-08-13 06:20:36.501442] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:25:34.774 [2024-08-13 06:20:36.501534] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:25:34.774 [2024-08-13 06:20:36.501572] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:25:34.774 [2024-08-13 06:20:36.501655] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.774 pt2 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.774 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.033 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.033 "name": "raid_bdev1", 00:25:35.033 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:35.033 "strip_size_kb": 0, 00:25:35.033 "state": "online", 00:25:35.033 "raid_level": "raid1", 00:25:35.033 "superblock": true, 00:25:35.033 "num_base_bdevs": 2, 00:25:35.033 "num_base_bdevs_discovered": 1, 00:25:35.033 "num_base_bdevs_operational": 1, 00:25:35.033 "base_bdevs_list": [ 00:25:35.033 { 00:25:35.033 "name": null, 00:25:35.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.033 "is_configured": false, 
00:25:35.033 "data_offset": 256, 00:25:35.033 "data_size": 7936 00:25:35.033 }, 00:25:35.033 { 00:25:35.033 "name": "pt2", 00:25:35.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.033 "is_configured": true, 00:25:35.033 "data_offset": 256, 00:25:35.033 "data_size": 7936 00:25:35.033 } 00:25:35.033 ] 00:25:35.033 }' 00:25:35.033 06:20:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.033 06:20:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.601 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:35.860 [2024-08-13 06:20:37.485754] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.860 [2024-08-13 06:20:37.485819] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.860 [2024-08-13 06:20:37.485877] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.860 [2024-08-13 06:20:37.485927] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.860 [2024-08-13 06:20:37.485955] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:25:35.860 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:25:35.860 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:36.120 [2024-08-13 06:20:37.865129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:36.120 [2024-08-13 06:20:37.865209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.120 [2024-08-13 06:20:37.865240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:36.120 [2024-08-13 06:20:37.865265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.120 [2024-08-13 06:20:37.866987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.120 [2024-08-13 06:20:37.867064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:36.120 [2024-08-13 06:20:37.867123] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:36.120 [2024-08-13 06:20:37.867176] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:36.120 [2024-08-13 06:20:37.867306] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:36.120 [2024-08-13 06:20:37.867359] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:25:36.120 [2024-08-13 06:20:37.867398] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:25:36.120 [2024-08-13 06:20:37.867463] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:36.120 [2024-08-13 06:20:37.867550] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:25:36.120 [2024-08-13 06:20:37.867585] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:36.120 [2024-08-13 06:20:37.867650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:36.120 [2024-08-13 06:20:37.867738] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:25:36.120 [2024-08-13 06:20:37.867774] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:25:36.120 [2024-08-13 06:20:37.867860] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.120 pt1 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.120 06:20:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.379 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.379 "name": "raid_bdev1", 00:25:36.379 "uuid": "57d15232-812f-4fdf-918a-e57b7b6e7570", 00:25:36.379 "strip_size_kb": 0, 00:25:36.379 "state": "online", 00:25:36.379 "raid_level": "raid1", 00:25:36.379 "superblock": true, 00:25:36.379 "num_base_bdevs": 2, 00:25:36.379 "num_base_bdevs_discovered": 1, 00:25:36.379 "num_base_bdevs_operational": 1, 00:25:36.379 "base_bdevs_list": [ 00:25:36.379 { 00:25:36.379 "name": null, 00:25:36.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.379 "is_configured": false, 00:25:36.379 "data_offset": 256, 00:25:36.379 "data_size": 7936 00:25:36.379 }, 00:25:36.379 { 00:25:36.379 "name": "pt2", 00:25:36.379 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:25:36.379 "is_configured": true, 00:25:36.379 "data_offset": 256, 00:25:36.379 "data_size": 7936 00:25:36.379 } 00:25:36.379 ] 00:25:36.379 }' 00:25:36.379 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.379 06:20:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.947 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:25:36.948 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:37.207 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:25:37.207 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:37.207 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:25:37.207 [2024-08-13 06:20:38.967514] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:37.207 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' 57d15232-812f-4fdf-918a-e57b7b6e7570 '!=' 57d15232-812f-4fdf-918a-e57b7b6e7570 ']' 00:25:37.467 06:20:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 107161 00:25:37.467 06:20:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 107161 ']' 00:25:37.467 06:20:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 107161 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107161 00:25:37.467 killing process with pid 107161 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107161' 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 107161 00:25:37.467 [2024-08-13 06:20:39.034816] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:37.467 [2024-08-13 06:20:39.034877] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.467 [2024-08-13 06:20:39.034908] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.467 [2024-08-13 06:20:39.034917] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:25:37.467 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 107161 00:25:37.467 [2024-08-13 06:20:39.058146] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:37.727 06:20:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:25:37.727 00:25:37.727 real 0m13.446s 00:25:37.727 user 0m24.345s 00:25:37.727 sys 0m2.386s 00:25:37.727 ************************************ 00:25:37.727 END TEST raid_superblock_test_md_separate 00:25:37.727 ************************************ 00:25:37.727 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:37.727 06:20:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.727 06:20:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # '[' true = true ']' 00:25:37.727 06:20:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:25:37.728 06:20:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:25:37.728 06:20:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:37.728 06:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.728 ************************************ 00:25:37.728 START TEST raid_rebuild_test_sb_md_separate 00:25:37.728 ************************************ 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:37.728 
06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=107634 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 107634 /var/tmp/spdk-raid.sock 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 107634 ']' 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:37.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:37.728 06:20:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.728 [2024-08-13 06:20:39.480240] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:25:37.728 [2024-08-13 06:20:39.480456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107634 ] 00:25:37.728 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:37.728 Zero copy mechanism will not be used. 
00:25:37.988 [2024-08-13 06:20:39.622851] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.988 [2024-08-13 06:20:39.666602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.988 [2024-08-13 06:20:39.708552] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:37.988 [2024-08-13 06:20:39.708664] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:38.557 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:38.557 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:25:38.557 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:38.557 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:25:38.816 BaseBdev1_malloc 00:25:38.816 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:39.076 [2024-08-13 06:20:40.665015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:39.076 [2024-08-13 06:20:40.665166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:39.076 [2024-08-13 06:20:40.665190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:39.076 [2024-08-13 06:20:40.665201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:39.076 [2024-08-13 06:20:40.666963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:39.076 [2024-08-13 06:20:40.667012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:39.076 BaseBdev1 00:25:39.076 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:39.076 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:25:39.336 BaseBdev2_malloc 00:25:39.336 06:20:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:39.336 [2024-08-13 06:20:41.105232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:39.336 [2024-08-13 06:20:41.105285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:39.336 [2024-08-13 06:20:41.105302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:39.336 [2024-08-13 06:20:41.105315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:39.336 [2024-08-13 06:20:41.107008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:39.336 [2024-08-13 06:20:41.107064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:39.336 BaseBdev2 00:25:39.336 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:25:39.595 spare_malloc 00:25:39.595 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:39.854 spare_delay 00:25:39.854 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:40.112 [2024-08-13 06:20:41.673337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:40.112 [2024-08-13 06:20:41.673390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.112 [2024-08-13 06:20:41.673406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:40.112 [2024-08-13 06:20:41.673416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.112 [2024-08-13 06:20:41.675209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.112 [2024-08-13 06:20:41.675249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:40.112 spare 00:25:40.112 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:40.112 [2024-08-13 06:20:41.869168] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.112 [2024-08-13 06:20:41.870925] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:40.112 [2024-08-13 06:20:41.871099] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:25:40.112 [2024-08-13 06:20:41.871119] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:40.112 [2024-08-13 06:20:41.871212] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:25:40.112 [2024-08-13 06:20:41.871307] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:25:40.112 [2024-08-13 06:20:41.871324] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:25:40.112 [2024-08-13 06:20:41.871414] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.112 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:40.112 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:40.112 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:40.112 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:40.112 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:40.112 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:40.371 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.371 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:25:40.371 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.371 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.372 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.372 06:20:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.372 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.372 "name": "raid_bdev1", 00:25:40.372 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:40.372 "strip_size_kb": 0, 00:25:40.372 "state": "online", 00:25:40.372 "raid_level": "raid1", 00:25:40.372 "superblock": true, 00:25:40.372 "num_base_bdevs": 2, 00:25:40.372 "num_base_bdevs_discovered": 2, 00:25:40.372 "num_base_bdevs_operational": 2, 00:25:40.372 "base_bdevs_list": [ 00:25:40.372 { 00:25:40.372 "name": "BaseBdev1", 00:25:40.372 "uuid": "084b1c68-2307-5c90-8f5a-532e92c47581", 00:25:40.372 "is_configured": true, 00:25:40.372 "data_offset": 256, 00:25:40.372 "data_size": 7936 00:25:40.372 }, 00:25:40.372 { 00:25:40.372 "name": "BaseBdev2", 00:25:40.372 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:40.372 "is_configured": true, 00:25:40.372 "data_offset": 256, 00:25:40.372 "data_size": 7936 00:25:40.372 } 00:25:40.372 ] 00:25:40.372 }' 00:25:40.372 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.372 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:40.941 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:40.941 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:41.202 [2024-08-13 06:20:42.763794] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:41.203 06:20:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:41.203 06:20:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:41.464 [2024-08-13 06:20:43.150983] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:41.464 /dev/nbd0 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.464 1+0 records in 00:25:41.464 1+0 records out 00:25:41.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778638 s, 5.3 MB/s 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:25:41.464 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:42.037 7936+0 records in 00:25:42.037 7936+0 records out 00:25:42.037 32505856 bytes (33 MB, 31 MiB) copied, 0.601981 s, 54.0 MB/s 00:25:42.037 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:42.037 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:42.037 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:42.037 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:42.037 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:42.037 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.037 06:20:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:42.301 [2024-08-13 06:20:44.019080] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.301 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:42.561 [2024-08-13 06:20:44.222781] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.561 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.826 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.826 "name": "raid_bdev1", 00:25:42.826 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:42.826 "strip_size_kb": 0, 00:25:42.826 "state": "online", 00:25:42.826 "raid_level": "raid1", 00:25:42.826 "superblock": true, 00:25:42.826 "num_base_bdevs": 2, 00:25:42.826 "num_base_bdevs_discovered": 1, 00:25:42.826 "num_base_bdevs_operational": 1, 00:25:42.826 "base_bdevs_list": [ 00:25:42.826 { 00:25:42.826 "name": null, 00:25:42.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.826 "is_configured": false, 00:25:42.826 "data_offset": 256, 00:25:42.826 "data_size": 7936 00:25:42.826 }, 00:25:42.826 { 00:25:42.826 "name": "BaseBdev2", 00:25:42.826 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:42.826 "is_configured": true, 00:25:42.826 "data_offset": 256, 00:25:42.826 "data_size": 7936 00:25:42.826 } 00:25:42.826 ] 00:25:42.826 }' 00:25:42.826 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.826 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:43.397 06:20:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:43.397 [2024-08-13 06:20:45.129225] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:43.397 [2024-08-13 06:20:45.130931] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:25:43.397 [2024-08-13 06:20:45.132620] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:43.397 06:20:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.777 
"name": "raid_bdev1", 00:25:44.777 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:44.777 "strip_size_kb": 0, 00:25:44.777 "state": "online", 00:25:44.777 "raid_level": "raid1", 00:25:44.777 "superblock": true, 00:25:44.777 "num_base_bdevs": 2, 00:25:44.777 "num_base_bdevs_discovered": 2, 00:25:44.777 "num_base_bdevs_operational": 2, 00:25:44.777 "process": { 00:25:44.777 "type": "rebuild", 00:25:44.777 "target": "spare", 00:25:44.777 "progress": { 00:25:44.777 "blocks": 3072, 00:25:44.777 "percent": 38 00:25:44.777 } 00:25:44.777 }, 00:25:44.777 "base_bdevs_list": [ 00:25:44.777 { 00:25:44.777 "name": "spare", 00:25:44.777 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:44.777 "is_configured": true, 00:25:44.777 "data_offset": 256, 00:25:44.777 "data_size": 7936 00:25:44.777 }, 00:25:44.777 { 00:25:44.777 "name": "BaseBdev2", 00:25:44.777 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:44.777 "is_configured": true, 00:25:44.777 "data_offset": 256, 00:25:44.777 "data_size": 7936 00:25:44.777 } 00:25:44.777 ] 00:25:44.777 }' 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.777 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:45.037 [2024-08-13 06:20:46.635194] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:45.037 [2024-08-13 06:20:46.638151] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:45.037 [2024-08-13 06:20:46.638212] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.037 [2024-08-13 06:20:46.638227] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:45.037 [2024-08-13 06:20:46.638239] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.037 06:20:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.037 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.038 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.297 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:45.297 "name": "raid_bdev1", 00:25:45.297 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:45.297 "strip_size_kb": 0, 00:25:45.297 "state": "online", 00:25:45.297 "raid_level": "raid1", 00:25:45.297 "superblock": true, 00:25:45.297 "num_base_bdevs": 2, 00:25:45.297 "num_base_bdevs_discovered": 1, 00:25:45.297 "num_base_bdevs_operational": 1, 00:25:45.297 "base_bdevs_list": [ 00:25:45.297 { 00:25:45.297 "name": null, 00:25:45.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.298 "is_configured": false, 00:25:45.298 "data_offset": 256, 00:25:45.298 "data_size": 7936 00:25:45.298 }, 00:25:45.298 { 00:25:45.298 "name": "BaseBdev2", 00:25:45.298 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:45.298 "is_configured": true, 00:25:45.298 "data_offset": 256, 00:25:45.298 "data_size": 7936 00:25:45.298 } 00:25:45.298 ] 00:25:45.298 }' 00:25:45.298 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:45.298 06:20:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:45.866 "name": "raid_bdev1", 00:25:45.866 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:45.866 "strip_size_kb": 0, 00:25:45.866 "state": "online", 00:25:45.866 "raid_level": "raid1", 00:25:45.866 "superblock": true, 00:25:45.866 "num_base_bdevs": 2, 00:25:45.866 "num_base_bdevs_discovered": 1, 00:25:45.866 "num_base_bdevs_operational": 1, 00:25:45.866 "base_bdevs_list": [ 00:25:45.866 { 00:25:45.866 "name": null, 00:25:45.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.866 "is_configured": false, 00:25:45.866 "data_offset": 256, 00:25:45.866 "data_size": 7936 00:25:45.866 }, 00:25:45.866 { 00:25:45.866 "name": "BaseBdev2", 00:25:45.866 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:45.866 "is_configured": true, 00:25:45.866 "data_offset": 256, 00:25:45.866 "data_size": 7936 00:25:45.866 } 00:25:45.866 ] 00:25:45.866 }' 00:25:45.866 06:20:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:45.866 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:46.125 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:46.125 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:46.125 [2024-08-13 06:20:47.850383] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:46.125 [2024-08-13 06:20:47.851833] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:25:46.125 [2024-08-13 06:20:47.853521] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:46.125 06:20:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:47.503 06:20:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.503 06:20:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:47.503 06:20:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:47.503 06:20:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:47.503 06:20:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:47.503 06:20:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.503 06:20:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:47.503 "name": "raid_bdev1", 00:25:47.503 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:47.503 "strip_size_kb": 0, 00:25:47.503 "state": "online", 00:25:47.503 "raid_level": "raid1", 00:25:47.503 "superblock": true, 00:25:47.503 "num_base_bdevs": 2, 00:25:47.503 "num_base_bdevs_discovered": 2, 00:25:47.503 "num_base_bdevs_operational": 2, 00:25:47.503 "process": { 00:25:47.503 "type": "rebuild", 00:25:47.503 "target": "spare", 00:25:47.503 "progress": { 00:25:47.503 "blocks": 3072, 00:25:47.503 "percent": 38 00:25:47.503 } 00:25:47.503 }, 00:25:47.503 "base_bdevs_list": [ 00:25:47.503 { 00:25:47.503 "name": "spare", 00:25:47.503 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:47.503 "is_configured": true, 00:25:47.503 "data_offset": 256, 00:25:47.503 "data_size": 7936 00:25:47.503 }, 00:25:47.503 { 00:25:47.503 "name": "BaseBdev2", 00:25:47.503 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:47.503 "is_configured": true, 00:25:47.503 "data_offset": 256, 00:25:47.503 "data_size": 7936 00:25:47.503 } 00:25:47.503 ] 00:25:47.503 }' 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:25:47.503 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:47.503 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1201 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.504 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.763 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:47.763 "name": "raid_bdev1", 00:25:47.763 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:47.763 "strip_size_kb": 0, 00:25:47.763 "state": "online", 00:25:47.763 "raid_level": "raid1", 00:25:47.763 "superblock": true, 00:25:47.763 "num_base_bdevs": 2, 00:25:47.763 "num_base_bdevs_discovered": 2, 00:25:47.763 "num_base_bdevs_operational": 2, 00:25:47.763 "process": { 00:25:47.763 "type": "rebuild", 00:25:47.763 "target": "spare", 00:25:47.763 "progress": { 00:25:47.763 "blocks": 3584, 00:25:47.763 "percent": 45 00:25:47.763 } 00:25:47.763 }, 00:25:47.763 "base_bdevs_list": [ 00:25:47.763 { 00:25:47.763 "name": "spare", 00:25:47.763 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:47.763 "is_configured": true, 00:25:47.763 "data_offset": 256, 00:25:47.763 "data_size": 7936 00:25:47.763 }, 00:25:47.763 { 00:25:47.763 "name": "BaseBdev2", 00:25:47.763 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:47.763 "is_configured": true, 00:25:47.763 "data_offset": 256, 00:25:47.763 "data_size": 7936 00:25:47.763 } 00:25:47.763 ] 00:25:47.763 }' 00:25:47.763 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:47.763 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:25:47.763 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:47.763 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.763 06:20:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.702 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.961 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.961 "name": "raid_bdev1", 00:25:48.962 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:48.962 "strip_size_kb": 0, 00:25:48.962 "state": "online", 00:25:48.962 "raid_level": "raid1", 00:25:48.962 "superblock": true, 00:25:48.962 "num_base_bdevs": 2, 00:25:48.962 "num_base_bdevs_discovered": 2, 00:25:48.962 "num_base_bdevs_operational": 2, 00:25:48.962 "process": { 00:25:48.962 "type": "rebuild", 00:25:48.962 "target": "spare", 00:25:48.962 "progress": { 00:25:48.962 "blocks": 6912, 00:25:48.962 "percent": 87 00:25:48.962 } 00:25:48.962 }, 00:25:48.962 "base_bdevs_list": [ 00:25:48.962 { 00:25:48.962 "name": "spare", 00:25:48.962 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:48.962 "is_configured": true, 00:25:48.962 "data_offset": 256, 00:25:48.962 "data_size": 7936 00:25:48.962 }, 00:25:48.962 { 00:25:48.962 "name": "BaseBdev2", 00:25:48.962 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:48.962 "is_configured": true, 00:25:48.962 "data_offset": 256, 00:25:48.962 "data_size": 7936 00:25:48.962 } 00:25:48.962 ] 00:25:48.962 }' 00:25:48.962 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:48.962 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.962 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:49.221 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.221 06:20:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:49.221 [2024-08-13 06:20:50.963446] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:49.221 [2024-08-13 06:20:50.963510] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:49.221 [2024-08-13 06:20:50.963606] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.160 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.419 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:50.419 "name": "raid_bdev1", 00:25:50.419 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:50.419 "strip_size_kb": 0, 00:25:50.419 "state": "online", 00:25:50.419 "raid_level": "raid1", 00:25:50.419 "superblock": true, 00:25:50.419 "num_base_bdevs": 2, 00:25:50.419 "num_base_bdevs_discovered": 2, 00:25:50.419 "num_base_bdevs_operational": 2, 00:25:50.419 "base_bdevs_list": [ 00:25:50.419 { 00:25:50.419 "name": "spare", 00:25:50.419 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:50.419 "is_configured": true, 00:25:50.419 "data_offset": 256, 00:25:50.419 "data_size": 7936 00:25:50.419 }, 00:25:50.419 { 00:25:50.419 "name": "BaseBdev2", 00:25:50.419 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:50.419 "is_configured": true, 00:25:50.419 "data_offset": 256, 00:25:50.419 "data_size": 7936 00:25:50.419 } 00:25:50.419 ] 00:25:50.419 }' 00:25:50.419 06:20:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.419 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.419 06:20:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:50.679 "name": "raid_bdev1", 00:25:50.679 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:50.679 "strip_size_kb": 0, 00:25:50.679 "state": "online", 00:25:50.679 "raid_level": "raid1", 00:25:50.679 "superblock": true, 00:25:50.679 "num_base_bdevs": 2, 00:25:50.679 "num_base_bdevs_discovered": 2, 00:25:50.679 "num_base_bdevs_operational": 2, 00:25:50.679 "base_bdevs_list": [ 00:25:50.679 { 00:25:50.679 "name": "spare", 00:25:50.679 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:50.679 "is_configured": true, 00:25:50.679 "data_offset": 256, 00:25:50.679 "data_size": 7936 00:25:50.679 }, 00:25:50.679 { 00:25:50.679 "name": "BaseBdev2", 00:25:50.679 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:50.679 "is_configured": true, 00:25:50.679 "data_offset": 256, 00:25:50.679 "data_size": 7936 00:25:50.679 } 00:25:50.679 ] 00:25:50.679 }' 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.679 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.938 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.938 "name": "raid_bdev1", 00:25:50.938 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:50.938 "strip_size_kb": 0, 00:25:50.938 "state": "online", 00:25:50.938 "raid_level": "raid1", 00:25:50.938 "superblock": true, 00:25:50.938 "num_base_bdevs": 2, 00:25:50.938 
"num_base_bdevs_discovered": 2, 00:25:50.938 "num_base_bdevs_operational": 2, 00:25:50.938 "base_bdevs_list": [ 00:25:50.938 { 00:25:50.938 "name": "spare", 00:25:50.938 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:50.938 "is_configured": true, 00:25:50.938 "data_offset": 256, 00:25:50.938 "data_size": 7936 00:25:50.938 }, 00:25:50.938 { 00:25:50.938 "name": "BaseBdev2", 00:25:50.938 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:50.938 "is_configured": true, 00:25:50.938 "data_offset": 256, 00:25:50.938 "data_size": 7936 00:25:50.938 } 00:25:50.938 ] 00:25:50.938 }' 00:25:50.938 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.938 06:20:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:51.506 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:51.506 [2024-08-13 06:20:53.287271] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:51.506 [2024-08-13 06:20:53.287302] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:51.506 [2024-08-13 06:20:53.287374] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:51.506 [2024-08-13 06:20:53.287443] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:51.506 [2024-08-13 06:20:53.287452] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:51.765 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:52.025 /dev/nbd0 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.025 1+0 records in 00:25:52.025 1+0 records out 00:25:52.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034158 s, 12.0 MB/s 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:52.025 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:52.284 /dev/nbd1 00:25:52.284 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:52.284 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:52.284 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:25:52.284 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:25:52.284 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:25:52.285 06:20:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.285 1+0 records in 00:25:52.285 1+0 records out 00:25:52.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271387 s, 15.1 MB/s 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:52.285 06:20:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:52.285 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:52.285 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:52.285 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:52.285 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:52.285 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:52.285 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.285 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:52.545 06:20:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.545 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:52.804 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:53.063 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:53.322 [2024-08-13 06:20:54.864627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:53.322 [2024-08-13 06:20:54.864675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.322 [2024-08-13 06:20:54.864694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:53.322 [2024-08-13 06:20:54.864702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.322 [2024-08-13 06:20:54.866498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.322 [2024-08-13 06:20:54.866534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:53.322 [2024-08-13 06:20:54.866585] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:53.322 [2024-08-13 06:20:54.866631] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:53.322 [2024-08-13 06:20:54.866753] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:53.322 spare 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.322 06:20:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.322 [2024-08-13 06:20:54.966632] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:25:53.322 [2024-08-13 06:20:54.966664] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:53.322 [2024-08-13 06:20:54.966783] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:25:53.322 [2024-08-13 06:20:54.966891] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:25:53.322 [2024-08-13 06:20:54.966903] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:25:53.322 [2024-08-13 06:20:54.966967] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.322 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.322 "name": "raid_bdev1", 00:25:53.322 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:53.322 "strip_size_kb": 0, 00:25:53.322 "state": "online", 00:25:53.322 "raid_level": "raid1", 00:25:53.322 "superblock": true, 00:25:53.322 "num_base_bdevs": 2, 00:25:53.322 "num_base_bdevs_discovered": 2, 00:25:53.322 "num_base_bdevs_operational": 2, 00:25:53.322 "base_bdevs_list": [ 00:25:53.322 { 00:25:53.322 "name": "spare", 00:25:53.322 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:53.322 "is_configured": true, 00:25:53.322 "data_offset": 256, 00:25:53.322 "data_size": 7936 00:25:53.322 }, 00:25:53.322 { 00:25:53.322 "name": "BaseBdev2", 00:25:53.322 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:53.322 "is_configured": true, 00:25:53.322 "data_offset": 256, 00:25:53.322 "data_size": 7936 00:25:53.322 } 00:25:53.322 ] 00:25:53.322 }' 00:25:53.322 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.322 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:53.890 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.890 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:53.890 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:53.890 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:53.890 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:53.890 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.890 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.149 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:54.149 "name": "raid_bdev1", 00:25:54.149 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:54.149 "strip_size_kb": 0, 00:25:54.149 "state": "online", 00:25:54.149 "raid_level": "raid1", 00:25:54.149 "superblock": true, 00:25:54.149 "num_base_bdevs": 2, 00:25:54.149 "num_base_bdevs_discovered": 2, 00:25:54.149 "num_base_bdevs_operational": 2, 00:25:54.149 "base_bdevs_list": [ 00:25:54.149 { 00:25:54.149 "name": "spare", 00:25:54.149 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:54.149 "is_configured": true, 00:25:54.149 "data_offset": 256, 00:25:54.149 "data_size": 7936 00:25:54.149 }, 00:25:54.149 { 00:25:54.149 "name": "BaseBdev2", 00:25:54.149 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:54.149 "is_configured": true, 00:25:54.149 "data_offset": 256, 00:25:54.149 "data_size": 7936 00:25:54.149 } 00:25:54.149 ] 00:25:54.149 }' 00:25:54.149 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:54.149 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:54.149 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:54.408 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:54.408 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.408 06:20:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:54.408 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.408 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:54.668 [2024-08-13 06:20:56.350170] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.668 06:20:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.668 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.927 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.927 "name": "raid_bdev1", 00:25:54.927 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:54.927 "strip_size_kb": 0, 00:25:54.927 "state": "online", 00:25:54.927 "raid_level": "raid1", 00:25:54.927 "superblock": true, 00:25:54.927 "num_base_bdevs": 2, 00:25:54.927 "num_base_bdevs_discovered": 1, 00:25:54.927 "num_base_bdevs_operational": 1, 00:25:54.927 "base_bdevs_list": [ 00:25:54.927 { 00:25:54.927 "name": null, 00:25:54.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.927 "is_configured": false, 00:25:54.927 "data_offset": 256, 00:25:54.927 "data_size": 7936 00:25:54.927 }, 00:25:54.927 { 00:25:54.927 "name": "BaseBdev2", 00:25:54.927 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:54.927 "is_configured": true, 00:25:54.927 "data_offset": 256, 00:25:54.927 "data_size": 7936 00:25:54.927 } 00:25:54.927 ] 00:25:54.927 }' 00:25:54.927 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.927 06:20:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:55.495 06:20:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:55.495 [2024-08-13 06:20:57.280635] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:55.495 [2024-08-13 06:20:57.280833] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:55.495 [2024-08-13 06:20:57.280851] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:55.495 [2024-08-13 06:20:57.280887] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:55.495 [2024-08-13 06:20:57.282556] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:25:55.495 [2024-08-13 06:20:57.284246] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:55.754 06:20:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:56.764 "name": "raid_bdev1", 00:25:56.764 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:56.764 "strip_size_kb": 0, 00:25:56.764 "state": "online", 00:25:56.764 "raid_level": "raid1", 00:25:56.764 "superblock": true, 00:25:56.764 "num_base_bdevs": 2, 00:25:56.764 "num_base_bdevs_discovered": 2, 00:25:56.764 "num_base_bdevs_operational": 2, 00:25:56.764 "process": { 00:25:56.764 "type": "rebuild", 00:25:56.764 "target": "spare", 00:25:56.764 "progress": { 00:25:56.764 "blocks": 2816, 00:25:56.764 "percent": 35 00:25:56.764 } 00:25:56.764 }, 00:25:56.764 "base_bdevs_list": [ 00:25:56.764 { 00:25:56.764 "name": "spare", 00:25:56.764 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:56.764 "is_configured": true, 00:25:56.764 "data_offset": 256, 00:25:56.764 "data_size": 7936 00:25:56.764 }, 00:25:56.764 { 00:25:56.764 "name": "BaseBdev2", 00:25:56.764 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:56.764 "is_configured": true, 00:25:56.764 "data_offset": 256, 00:25:56.764 "data_size": 7936 00:25:56.764 } 00:25:56.764 ] 00:25:56.764 }' 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:56.764 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:57.024 [2024-08-13 06:20:58.775111] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:57.024 [2024-08-13 06:20:58.789232] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:25:57.024 [2024-08-13 06:20:58.789285] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.024 [2024-08-13 06:20:58.789298] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:57.024 [2024-08-13 06:20:58.789309] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.024 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.283 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.283 06:20:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.283 06:20:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.283 "name": "raid_bdev1", 00:25:57.283 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:57.283 "strip_size_kb": 0, 00:25:57.283 "state": "online", 00:25:57.283 "raid_level": "raid1", 00:25:57.283 "superblock": true, 00:25:57.283 "num_base_bdevs": 2, 00:25:57.283 "num_base_bdevs_discovered": 1, 00:25:57.283 "num_base_bdevs_operational": 1, 00:25:57.283 "base_bdevs_list": [ 00:25:57.283 { 00:25:57.283 "name": null, 00:25:57.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.283 "is_configured": false, 00:25:57.283 "data_offset": 256, 00:25:57.283 "data_size": 7936 00:25:57.283 }, 00:25:57.283 { 00:25:57.283 "name": "BaseBdev2", 00:25:57.283 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:57.283 "is_configured": true, 00:25:57.283 "data_offset": 256, 00:25:57.283 "data_size": 7936 00:25:57.283 } 00:25:57.283 ] 00:25:57.283 }' 00:25:57.283 06:20:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.283 06:20:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:57.853 06:20:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:58.113 [2024-08-13 06:20:59.722397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:58.113 [2024-08-13 06:20:59.722460] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.113 [2024-08-13 06:20:59.722493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:58.113 [2024-08-13 06:20:59.722505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.113 [2024-08-13 06:20:59.722720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.113 [2024-08-13 06:20:59.722737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:58.113 [2024-08-13 06:20:59.722795] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:58.113 [2024-08-13 06:20:59.722809] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:58.113 [2024-08-13 06:20:59.722818] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:58.113 [2024-08-13 06:20:59.722842] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.113 [2024-08-13 06:20:59.724501] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:25:58.113 [2024-08-13 06:20:59.726230] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:58.113 spare 00:25:58.113 06:20:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:25:59.052 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.052 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:59.052 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:59.052 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:59.053 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:59.053 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.053 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.312 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:59.312 "name": "raid_bdev1", 00:25:59.312 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:59.312 "strip_size_kb": 0, 00:25:59.312 "state": "online", 00:25:59.312 "raid_level": "raid1", 00:25:59.312 "superblock": true, 00:25:59.312 "num_base_bdevs": 2, 00:25:59.313 "num_base_bdevs_discovered": 2, 00:25:59.313 "num_base_bdevs_operational": 2, 00:25:59.313 "process": { 00:25:59.313 "type": "rebuild", 00:25:59.313 "target": "spare", 00:25:59.313 "progress": { 00:25:59.313 "blocks": 2816, 00:25:59.313 "percent": 35 00:25:59.313 } 00:25:59.313 }, 00:25:59.313 "base_bdevs_list": [ 00:25:59.313 { 00:25:59.313 "name": "spare", 00:25:59.313 "uuid": "8a2961a3-cc5a-56b9-a9cd-67464c3a5035", 00:25:59.313 "is_configured": true, 00:25:59.313 "data_offset": 256, 00:25:59.313 "data_size": 7936 00:25:59.313 }, 00:25:59.313 { 00:25:59.313 "name": "BaseBdev2", 00:25:59.313 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:59.313 "is_configured": true, 00:25:59.313 
"data_offset": 256, 00:25:59.313 "data_size": 7936 00:25:59.313 } 00:25:59.313 ] 00:25:59.313 }' 00:25:59.313 06:21:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:59.313 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:59.313 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:59.313 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:59.313 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:59.573 [2024-08-13 06:21:01.208579] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.573 [2024-08-13 06:21:01.231322] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:59.573 [2024-08-13 06:21:01.231371] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.573 [2024-08-13 06:21:01.231386] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.573 [2024-08-13 06:21:01.231393] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.573 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.832 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.832 "name": "raid_bdev1", 00:25:59.832 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:25:59.832 "strip_size_kb": 0, 00:25:59.833 "state": "online", 00:25:59.833 "raid_level": "raid1", 00:25:59.833 "superblock": true, 00:25:59.833 "num_base_bdevs": 2, 00:25:59.833 "num_base_bdevs_discovered": 1, 00:25:59.833 "num_base_bdevs_operational": 1, 00:25:59.833 "base_bdevs_list": [ 00:25:59.833 { 00:25:59.833 "name": null, 00:25:59.833 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:59.833 "is_configured": false, 00:25:59.833 "data_offset": 256, 00:25:59.833 "data_size": 7936 00:25:59.833 }, 00:25:59.833 { 00:25:59.833 "name": "BaseBdev2", 00:25:59.833 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:25:59.833 "is_configured": true, 00:25:59.833 "data_offset": 256, 00:25:59.833 "data_size": 7936 00:25:59.833 } 00:25:59.833 ] 00:25:59.833 }' 00:25:59.833 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.833 06:21:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:00.402 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:00.402 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:00.402 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:00.402 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:00.402 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:00.402 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.402 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.663 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:00.663 "name": "raid_bdev1", 00:26:00.663 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:26:00.663 "strip_size_kb": 0, 00:26:00.663 "state": "online", 00:26:00.663 "raid_level": "raid1", 00:26:00.663 "superblock": true, 00:26:00.663 "num_base_bdevs": 2, 00:26:00.663 "num_base_bdevs_discovered": 1, 00:26:00.663 "num_base_bdevs_operational": 1, 00:26:00.663 "base_bdevs_list": [ 00:26:00.663 { 00:26:00.663 "name": null, 00:26:00.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.663 "is_configured": false, 00:26:00.663 "data_offset": 256, 00:26:00.663 "data_size": 7936 00:26:00.663 }, 00:26:00.663 { 00:26:00.663 "name": "BaseBdev2", 00:26:00.663 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:26:00.663 "is_configured": true, 00:26:00.663 "data_offset": 256, 00:26:00.663 "data_size": 7936 00:26:00.663 } 00:26:00.663 ] 00:26:00.663 }' 00:26:00.663 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:00.663 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:00.663 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:00.663 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:00.663 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:00.922 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:00.922 [2024-08-13 06:21:02.675841] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:26:00.922 [2024-08-13 06:21:02.675895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.922 [2024-08-13 06:21:02.675916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:00.922 [2024-08-13 06:21:02.675926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.922 [2024-08-13 06:21:02.676130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.923 [2024-08-13 06:21:02.676145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:00.923 [2024-08-13 06:21:02.676204] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:00.923 [2024-08-13 06:21:02.676217] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:00.923 [2024-08-13 06:21:02.676229] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:00.923 BaseBdev1 00:26:00.923 06:21:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.303 "name": "raid_bdev1", 00:26:02.303 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:26:02.303 "strip_size_kb": 0, 00:26:02.303 "state": "online", 00:26:02.303 "raid_level": "raid1", 00:26:02.303 "superblock": true, 00:26:02.303 "num_base_bdevs": 2, 00:26:02.303 "num_base_bdevs_discovered": 1, 00:26:02.303 "num_base_bdevs_operational": 1, 00:26:02.303 "base_bdevs_list": [ 00:26:02.303 { 00:26:02.303 "name": null, 00:26:02.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.303 "is_configured": false, 00:26:02.303 "data_offset": 256, 00:26:02.303 "data_size": 7936 00:26:02.303 }, 00:26:02.303 { 00:26:02.303 "name": 
"BaseBdev2", 00:26:02.303 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:26:02.303 "is_configured": true, 00:26:02.303 "data_offset": 256, 00:26:02.303 "data_size": 7936 00:26:02.303 } 00:26:02.303 ] 00:26:02.303 }' 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:02.303 06:21:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:02.873 "name": "raid_bdev1", 00:26:02.873 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:26:02.873 "strip_size_kb": 0, 00:26:02.873 "state": "online", 00:26:02.873 "raid_level": "raid1", 00:26:02.873 "superblock": true, 00:26:02.873 "num_base_bdevs": 2, 00:26:02.873 "num_base_bdevs_discovered": 1, 00:26:02.873 "num_base_bdevs_operational": 1, 00:26:02.873 "base_bdevs_list": [ 00:26:02.873 { 00:26:02.873 "name": null, 00:26:02.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.873 "is_configured": false, 00:26:02.873 "data_offset": 256, 00:26:02.873 "data_size": 7936 00:26:02.873 }, 00:26:02.873 { 00:26:02.873 "name": "BaseBdev2", 00:26:02.873 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:26:02.873 "is_configured": true, 00:26:02.873 "data_offset": 256, 00:26:02.873 "data_size": 7936 00:26:02.873 } 00:26:02.873 ] 00:26:02.873 }' 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:02.873 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:03.133 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:03.133 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:03.133 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:03.133 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@646 -- # local es=0 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@634 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:03.134 [2024-08-13 06:21:04.880154] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:03.134 [2024-08-13 06:21:04.880299] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:03.134 [2024-08-13 06:21:04.880310] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:03.134 request: 00:26:03.134 { 00:26:03.134 "base_bdev": "BaseBdev1", 00:26:03.134 "raid_bdev": "raid_bdev1", 00:26:03.134 "method": "bdev_raid_add_base_bdev", 00:26:03.134 "req_id": 1 00:26:03.134 } 00:26:03.134 Got JSON-RPC error response 00:26:03.134 response: 00:26:03.134 { 00:26:03.134 "code": -22, 00:26:03.134 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:03.134 } 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@649 -- # es=1 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:26:03.134 06:21:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.515 06:21:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.515 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.515 "name": "raid_bdev1", 00:26:04.515 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:26:04.515 "strip_size_kb": 0, 00:26:04.515 "state": "online", 00:26:04.515 "raid_level": "raid1", 00:26:04.515 "superblock": true, 00:26:04.515 "num_base_bdevs": 2, 00:26:04.515 "num_base_bdevs_discovered": 1, 00:26:04.515 "num_base_bdevs_operational": 1, 00:26:04.515 "base_bdevs_list": [ 00:26:04.515 { 00:26:04.515 "name": null, 00:26:04.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.515 "is_configured": false, 00:26:04.515 "data_offset": 256, 00:26:04.515 "data_size": 7936 00:26:04.515 }, 00:26:04.515 { 00:26:04.515 "name": "BaseBdev2", 00:26:04.515 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:26:04.515 "is_configured": true, 00:26:04.515 "data_offset": 256, 00:26:04.515 "data_size": 7936 00:26:04.515 } 00:26:04.515 ] 00:26:04.515 }' 00:26:04.515 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.515 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:05.085 "name": "raid_bdev1", 00:26:05.085 "uuid": "653ceac9-9b8c-4dd2-be0d-3a5558a48957", 00:26:05.085 "strip_size_kb": 0, 00:26:05.085 "state": "online", 00:26:05.085 "raid_level": "raid1", 00:26:05.085 "superblock": true, 00:26:05.085 "num_base_bdevs": 2, 00:26:05.085 "num_base_bdevs_discovered": 1, 00:26:05.085 "num_base_bdevs_operational": 1, 00:26:05.085 "base_bdevs_list": [ 00:26:05.085 { 00:26:05.085 "name": null, 00:26:05.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.085 "is_configured": false, 00:26:05.085 "data_offset": 256, 00:26:05.085 "data_size": 7936 
00:26:05.085 }, 00:26:05.085 { 00:26:05.085 "name": "BaseBdev2", 00:26:05.085 "uuid": "83c2df30-31c3-5878-9594-8b044b381c82", 00:26:05.085 "is_configured": true, 00:26:05.085 "data_offset": 256, 00:26:05.085 "data_size": 7936 00:26:05.085 } 00:26:05.085 ] 00:26:05.085 }' 00:26:05.085 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 107634 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 107634 ']' 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 107634 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107634 00:26:05.345 killing process with pid 107634 00:26:05.345 Received shutdown signal, test time was about 60.000000 seconds 00:26:05.345 00:26:05.345 Latency(us) 00:26:05.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.345 =================================================================================================================== 00:26:05.345 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107634' 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 107634 00:26:05.345 [2024-08-13 06:21:06.989841] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:05.345 [2024-08-13 06:21:06.989968] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.345 06:21:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 107634 00:26:05.345 [2024-08-13 06:21:06.990020] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.345 [2024-08-13 06:21:06.990051] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:26:05.345 [2024-08-13 06:21:07.023441] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:05.605 ************************************ 00:26:05.605 END TEST raid_rebuild_test_sb_md_separate 00:26:05.605 ************************************ 00:26:05.605 06:21:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:26:05.605 00:26:05.605 real 0m27.872s 00:26:05.605 user 0m43.238s 00:26:05.605 sys 0m4.007s 
00:26:05.605 06:21:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:05.605 06:21:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 06:21:07 bdev_raid -- bdev/bdev_raid.sh@989 -- # base_malloc_params='-m 32 -i' 00:26:05.605 06:21:07 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:26:05.605 06:21:07 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:05.605 06:21:07 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:05.605 06:21:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 ************************************ 00:26:05.605 START TEST raid_state_function_test_sb_md_interleaved 00:26:05.605 ************************************ 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:05.605 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # 
strip_size=0 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=108426 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:05.606 Process raid pid: 108426 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 108426' 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 108426 /var/tmp/spdk-raid.sock 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 108426 ']' 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:05.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:05.606 06:21:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.866 [2024-08-13 06:21:07.415918] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:26:05.866 [2024-08-13 06:21:07.416076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.866 [2024-08-13 06:21:07.563724] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.866 [2024-08-13 06:21:07.611077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.866 [2024-08-13 06:21:07.654315] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:05.866 [2024-08-13 06:21:07.654356] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:06.435 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:06.435 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:26:06.435 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:06.695 [2024-08-13 06:21:08.370537] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:06.695 [2024-08-13 06:21:08.370587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:06.695 [2024-08-13 06:21:08.370607] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:06.695 [2024-08-13 06:21:08.370615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.695 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.955 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:26:06.955 "name": "Existed_Raid", 00:26:06.955 "uuid": "0a5ae700-3666-478e-8fae-d4524ad403f5", 00:26:06.955 "strip_size_kb": 0, 00:26:06.955 "state": "configuring", 00:26:06.955 "raid_level": "raid1", 00:26:06.955 "superblock": true, 00:26:06.955 "num_base_bdevs": 2, 00:26:06.955 "num_base_bdevs_discovered": 0, 00:26:06.955 "num_base_bdevs_operational": 2, 00:26:06.955 "base_bdevs_list": [ 00:26:06.955 { 00:26:06.955 "name": "BaseBdev1", 00:26:06.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.955 "is_configured": false, 00:26:06.955 "data_offset": 0, 00:26:06.955 "data_size": 0 00:26:06.955 }, 00:26:06.955 { 00:26:06.955 "name": "BaseBdev2", 00:26:06.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.955 "is_configured": false, 00:26:06.955 "data_offset": 0, 00:26:06.955 "data_size": 0 00:26:06.955 } 00:26:06.955 ] 00:26:06.955 }' 00:26:06.955 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.955 06:21:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:07.524 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:07.784 [2024-08-13 06:21:09.328789] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:07.784 [2024-08-13 06:21:09.328831] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:26:07.784 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:07.784 [2024-08-13 06:21:09.492522] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:07.784 [2024-08-13 06:21:09.492569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:07.784 [2024-08-13 06:21:09.492587] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:07.784 [2024-08-13 06:21:09.492595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:07.784 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:26:08.044 [2024-08-13 06:21:09.700958] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:08.044 BaseBdev1 00:26:08.044 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:08.044 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:26:08.044 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:08.044 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:26:08.044 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:08.044 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:08.044 06:21:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:08.304 06:21:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:08.304 [ 00:26:08.304 { 00:26:08.304 "name": "BaseBdev1", 00:26:08.304 "aliases": [ 00:26:08.304 "4554b88b-6972-4ce2-83bf-436f8976efd0" 00:26:08.304 ], 00:26:08.304 "product_name": "Malloc disk", 00:26:08.304 "block_size": 4128, 00:26:08.304 "num_blocks": 8192, 00:26:08.304 "uuid": "4554b88b-6972-4ce2-83bf-436f8976efd0", 00:26:08.304 "md_size": 32, 00:26:08.304 "md_interleave": true, 00:26:08.304 "dif_type": 0, 00:26:08.304 "assigned_rate_limits": { 00:26:08.304 "rw_ios_per_sec": 0, 00:26:08.304 "rw_mbytes_per_sec": 0, 00:26:08.304 "r_mbytes_per_sec": 0, 00:26:08.304 "w_mbytes_per_sec": 0 00:26:08.304 }, 00:26:08.304 "claimed": true, 00:26:08.304 "claim_type": "exclusive_write", 00:26:08.304 "zoned": false, 00:26:08.304 "supported_io_types": { 00:26:08.304 "read": true, 00:26:08.304 "write": true, 00:26:08.304 "unmap": true, 00:26:08.304 "flush": true, 00:26:08.304 "reset": true, 00:26:08.304 "nvme_admin": false, 00:26:08.304 "nvme_io": false, 00:26:08.304 "nvme_io_md": false, 00:26:08.304 "write_zeroes": true, 00:26:08.304 "zcopy": true, 00:26:08.304 "get_zone_info": false, 00:26:08.304 "zone_management": false, 00:26:08.304 "zone_append": false, 00:26:08.304 "compare": false, 00:26:08.304 "compare_and_write": false, 00:26:08.304 "abort": true, 00:26:08.304 "seek_hole": false, 00:26:08.304 "seek_data": false, 00:26:08.304 "copy": true, 00:26:08.304 "nvme_iov_md": false 00:26:08.304 }, 00:26:08.304 "memory_domains": [ 00:26:08.304 { 00:26:08.304 "dma_device_id": "system", 00:26:08.304 "dma_device_type": 1 00:26:08.304 }, 00:26:08.304 { 00:26:08.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.304 "dma_device_type": 2 00:26:08.304 } 00:26:08.304 ], 00:26:08.304 "driver_specific": {} 00:26:08.304 } 00:26:08.304 ] 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.304 06:21:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.304 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.564 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.564 "name": "Existed_Raid", 00:26:08.564 "uuid": "baff4937-f7dc-461a-a5bd-054f2d144914", 00:26:08.564 "strip_size_kb": 0, 00:26:08.564 "state": "configuring", 00:26:08.564 "raid_level": "raid1", 00:26:08.564 "superblock": true, 00:26:08.564 "num_base_bdevs": 2, 00:26:08.564 "num_base_bdevs_discovered": 1, 00:26:08.564 "num_base_bdevs_operational": 2, 00:26:08.564 "base_bdevs_list": [ 00:26:08.564 { 00:26:08.564 "name": "BaseBdev1", 00:26:08.564 "uuid": "4554b88b-6972-4ce2-83bf-436f8976efd0", 00:26:08.564 "is_configured": true, 00:26:08.564 "data_offset": 256, 00:26:08.564 "data_size": 7936 00:26:08.564 }, 00:26:08.564 { 00:26:08.564 "name": "BaseBdev2", 00:26:08.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.564 "is_configured": false, 00:26:08.564 "data_offset": 0, 00:26:08.564 "data_size": 0 00:26:08.564 } 00:26:08.564 ] 00:26:08.564 }' 00:26:08.564 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.564 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.134 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:09.393 [2024-08-13 06:21:10.962822] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:09.393 [2024-08-13 06:21:10.962874] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:26:09.393 06:21:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:09.393 [2024-08-13 06:21:11.166501] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:09.393 [2024-08-13 06:21:11.168145] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:09.393 [2024-08-13 06:21:11.168180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:09.393 06:21:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.393 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.653 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.653 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.653 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.653 "name": "Existed_Raid", 00:26:09.653 "uuid": "af050d55-0f01-4789-9e8b-02426bdd72f1", 00:26:09.653 "strip_size_kb": 0, 00:26:09.653 "state": "configuring", 00:26:09.653 "raid_level": "raid1", 00:26:09.653 "superblock": true, 00:26:09.653 "num_base_bdevs": 2, 00:26:09.653 "num_base_bdevs_discovered": 1, 00:26:09.653 "num_base_bdevs_operational": 2, 00:26:09.653 "base_bdevs_list": [ 00:26:09.653 { 00:26:09.653 "name": "BaseBdev1", 00:26:09.653 "uuid": "4554b88b-6972-4ce2-83bf-436f8976efd0", 00:26:09.653 "is_configured": true, 00:26:09.653 "data_offset": 256, 00:26:09.653 "data_size": 7936 00:26:09.653 }, 00:26:09.653 { 00:26:09.653 "name": "BaseBdev2", 00:26:09.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.653 "is_configured": false, 00:26:09.653 "data_offset": 0, 00:26:09.653 "data_size": 0 00:26:09.653 } 00:26:09.653 ] 00:26:09.653 }' 00:26:09.653 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.653 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.222 06:21:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:26:10.482 [2024-08-13 06:21:12.104089] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:10.482 [2024-08-13 06:21:12.104616] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:26:10.482 [2024-08-13 06:21:12.104683] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:10.482 [2024-08-13 06:21:12.104960] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:26:10.482 [2024-08-13 06:21:12.105329] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:26:10.482 [2024-08-13 06:21:12.105383] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:26:10.482 BaseBdev2 00:26:10.482 [2024-08-13 06:21:12.105599] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.482 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:10.482 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:10.482 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:10.482 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:26:10.482 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:10.482 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:10.482 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:10.742 [ 00:26:10.742 { 00:26:10.742 "name": "BaseBdev2", 00:26:10.742 "aliases": [ 00:26:10.742 "01cc2b67-45d0-45bb-b25b-9b987051cca1" 00:26:10.742 ], 00:26:10.742 "product_name": "Malloc disk", 00:26:10.742 "block_size": 4128, 00:26:10.742 "num_blocks": 8192, 00:26:10.742 "uuid": "01cc2b67-45d0-45bb-b25b-9b987051cca1", 00:26:10.742 "md_size": 32, 00:26:10.742 "md_interleave": true, 00:26:10.742 "dif_type": 0, 00:26:10.742 "assigned_rate_limits": { 00:26:10.742 "rw_ios_per_sec": 0, 00:26:10.742 "rw_mbytes_per_sec": 0, 00:26:10.742 "r_mbytes_per_sec": 0, 00:26:10.742 "w_mbytes_per_sec": 0 00:26:10.742 }, 00:26:10.742 "claimed": true, 00:26:10.742 "claim_type": "exclusive_write", 00:26:10.742 "zoned": false, 00:26:10.742 "supported_io_types": { 00:26:10.742 "read": true, 00:26:10.742 "write": true, 00:26:10.742 "unmap": true, 00:26:10.742 "flush": true, 00:26:10.742 "reset": true, 00:26:10.742 "nvme_admin": false, 00:26:10.742 "nvme_io": false, 00:26:10.742 "nvme_io_md": false, 00:26:10.742 "write_zeroes": true, 00:26:10.742 "zcopy": true, 00:26:10.742 "get_zone_info": false, 00:26:10.742 "zone_management": false, 00:26:10.742 "zone_append": false, 00:26:10.742 "compare": false, 00:26:10.742 "compare_and_write": false, 00:26:10.742 "abort": true, 00:26:10.742 "seek_hole": false, 00:26:10.742 "seek_data": false, 00:26:10.742 "copy": true, 00:26:10.742 "nvme_iov_md": false 00:26:10.742 }, 00:26:10.742 "memory_domains": [ 00:26:10.742 { 00:26:10.742 "dma_device_id": "system", 00:26:10.742 "dma_device_type": 1 00:26:10.742 }, 00:26:10.742 { 00:26:10.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.742 "dma_device_type": 2 00:26:10.742 } 00:26:10.742 ], 00:26:10.742 "driver_specific": {} 00:26:10.742 } 00:26:10.742 ] 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 
00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:10.742 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:10.743 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.743 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.002 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.002 "name": "Existed_Raid", 00:26:11.002 "uuid": "af050d55-0f01-4789-9e8b-02426bdd72f1", 00:26:11.002 "strip_size_kb": 0, 00:26:11.002 "state": "online", 00:26:11.002 "raid_level": "raid1", 00:26:11.002 "superblock": true, 00:26:11.002 "num_base_bdevs": 2, 00:26:11.002 "num_base_bdevs_discovered": 2, 00:26:11.002 "num_base_bdevs_operational": 2, 00:26:11.002 "base_bdevs_list": [ 00:26:11.002 { 00:26:11.002 "name": "BaseBdev1", 00:26:11.002 "uuid": "4554b88b-6972-4ce2-83bf-436f8976efd0", 00:26:11.002 "is_configured": true, 00:26:11.002 "data_offset": 256, 00:26:11.002 "data_size": 7936 00:26:11.002 }, 00:26:11.002 { 00:26:11.002 "name": "BaseBdev2", 00:26:11.002 "uuid": "01cc2b67-45d0-45bb-b25b-9b987051cca1", 00:26:11.002 "is_configured": true, 00:26:11.002 "data_offset": 256, 00:26:11.002 "data_size": 7936 00:26:11.002 } 00:26:11.002 ] 00:26:11.002 }' 00:26:11.002 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.002 06:21:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.572 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:11.572 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:11.572 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:11.572 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:11.572 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:11.572 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:26:11.572 
06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:11.572 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:11.832 [2024-08-13 06:21:13.430219] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:11.832 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:11.832 "name": "Existed_Raid", 00:26:11.832 "aliases": [ 00:26:11.832 "af050d55-0f01-4789-9e8b-02426bdd72f1" 00:26:11.832 ], 00:26:11.832 "product_name": "Raid Volume", 00:26:11.832 "block_size": 4128, 00:26:11.832 "num_blocks": 7936, 00:26:11.832 "uuid": "af050d55-0f01-4789-9e8b-02426bdd72f1", 00:26:11.832 "md_size": 32, 00:26:11.832 "md_interleave": true, 00:26:11.832 "dif_type": 0, 00:26:11.832 "assigned_rate_limits": { 00:26:11.832 "rw_ios_per_sec": 0, 00:26:11.832 "rw_mbytes_per_sec": 0, 00:26:11.832 "r_mbytes_per_sec": 0, 00:26:11.832 "w_mbytes_per_sec": 0 00:26:11.832 }, 00:26:11.832 "claimed": false, 00:26:11.832 "zoned": false, 00:26:11.832 "supported_io_types": { 00:26:11.832 "read": true, 00:26:11.832 "write": true, 00:26:11.832 "unmap": false, 00:26:11.832 "flush": false, 00:26:11.832 "reset": true, 00:26:11.832 "nvme_admin": false, 00:26:11.832 "nvme_io": false, 00:26:11.832 "nvme_io_md": false, 00:26:11.832 "write_zeroes": true, 00:26:11.832 "zcopy": false, 00:26:11.832 "get_zone_info": false, 00:26:11.832 "zone_management": false, 00:26:11.832 "zone_append": false, 00:26:11.832 "compare": false, 00:26:11.832 "compare_and_write": false, 00:26:11.832 "abort": false, 00:26:11.832 "seek_hole": false, 00:26:11.832 "seek_data": false, 00:26:11.832 "copy": false, 00:26:11.832 "nvme_iov_md": false 00:26:11.832 }, 00:26:11.832 "memory_domains": [ 00:26:11.832 { 00:26:11.832 "dma_device_id": "system", 00:26:11.832 "dma_device_type": 1 00:26:11.832 }, 00:26:11.832 { 00:26:11.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.832 "dma_device_type": 2 00:26:11.832 }, 00:26:11.832 { 00:26:11.832 "dma_device_id": "system", 00:26:11.832 "dma_device_type": 1 00:26:11.832 }, 00:26:11.832 { 00:26:11.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.832 "dma_device_type": 2 00:26:11.832 } 00:26:11.832 ], 00:26:11.832 "driver_specific": { 00:26:11.832 "raid": { 00:26:11.832 "uuid": "af050d55-0f01-4789-9e8b-02426bdd72f1", 00:26:11.832 "strip_size_kb": 0, 00:26:11.832 "state": "online", 00:26:11.832 "raid_level": "raid1", 00:26:11.832 "superblock": true, 00:26:11.832 "num_base_bdevs": 2, 00:26:11.832 "num_base_bdevs_discovered": 2, 00:26:11.832 "num_base_bdevs_operational": 2, 00:26:11.832 "base_bdevs_list": [ 00:26:11.832 { 00:26:11.832 "name": "BaseBdev1", 00:26:11.832 "uuid": "4554b88b-6972-4ce2-83bf-436f8976efd0", 00:26:11.832 "is_configured": true, 00:26:11.832 "data_offset": 256, 00:26:11.832 "data_size": 7936 00:26:11.832 }, 00:26:11.832 { 00:26:11.832 "name": "BaseBdev2", 00:26:11.832 "uuid": "01cc2b67-45d0-45bb-b25b-9b987051cca1", 00:26:11.832 "is_configured": true, 00:26:11.832 "data_offset": 256, 00:26:11.832 "data_size": 7936 00:26:11.832 } 00:26:11.832 ] 00:26:11.832 } 00:26:11.832 } 00:26:11.832 }' 00:26:11.832 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:11.832 06:21:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:11.832 BaseBdev2' 00:26:11.832 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:11.832 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:11.832 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:12.091 "name": "BaseBdev1", 00:26:12.091 "aliases": [ 00:26:12.091 "4554b88b-6972-4ce2-83bf-436f8976efd0" 00:26:12.091 ], 00:26:12.091 "product_name": "Malloc disk", 00:26:12.091 "block_size": 4128, 00:26:12.091 "num_blocks": 8192, 00:26:12.091 "uuid": "4554b88b-6972-4ce2-83bf-436f8976efd0", 00:26:12.091 "md_size": 32, 00:26:12.091 "md_interleave": true, 00:26:12.091 "dif_type": 0, 00:26:12.091 "assigned_rate_limits": { 00:26:12.091 "rw_ios_per_sec": 0, 00:26:12.091 "rw_mbytes_per_sec": 0, 00:26:12.091 "r_mbytes_per_sec": 0, 00:26:12.091 "w_mbytes_per_sec": 0 00:26:12.091 }, 00:26:12.091 "claimed": true, 00:26:12.091 "claim_type": "exclusive_write", 00:26:12.091 "zoned": false, 00:26:12.091 "supported_io_types": { 00:26:12.091 "read": true, 00:26:12.091 "write": true, 00:26:12.091 "unmap": true, 00:26:12.091 "flush": true, 00:26:12.091 "reset": true, 00:26:12.091 "nvme_admin": false, 00:26:12.091 "nvme_io": false, 00:26:12.091 "nvme_io_md": false, 00:26:12.091 "write_zeroes": true, 00:26:12.091 "zcopy": true, 00:26:12.091 "get_zone_info": false, 00:26:12.091 "zone_management": false, 00:26:12.091 "zone_append": false, 00:26:12.091 "compare": false, 00:26:12.091 "compare_and_write": false, 00:26:12.091 "abort": true, 00:26:12.091 "seek_hole": false, 00:26:12.091 "seek_data": false, 00:26:12.091 "copy": true, 00:26:12.091 "nvme_iov_md": false 00:26:12.091 }, 00:26:12.091 "memory_domains": [ 00:26:12.091 { 00:26:12.091 "dma_device_id": "system", 00:26:12.091 "dma_device_type": 1 00:26:12.091 }, 00:26:12.091 { 00:26:12.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.091 "dma_device_type": 2 00:26:12.091 } 00:26:12.091 ], 00:26:12.091 "driver_specific": {} 00:26:12.091 }' 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:12.091 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:12.351 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:12.351 06:21:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:12.351 06:21:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:12.351 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:12.351 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:12.351 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:12.351 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:12.351 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:12.612 "name": "BaseBdev2", 00:26:12.612 "aliases": [ 00:26:12.612 "01cc2b67-45d0-45bb-b25b-9b987051cca1" 00:26:12.612 ], 00:26:12.612 "product_name": "Malloc disk", 00:26:12.612 "block_size": 4128, 00:26:12.612 "num_blocks": 8192, 00:26:12.612 "uuid": "01cc2b67-45d0-45bb-b25b-9b987051cca1", 00:26:12.612 "md_size": 32, 00:26:12.612 "md_interleave": true, 00:26:12.612 "dif_type": 0, 00:26:12.612 "assigned_rate_limits": { 00:26:12.612 "rw_ios_per_sec": 0, 00:26:12.612 "rw_mbytes_per_sec": 0, 00:26:12.612 "r_mbytes_per_sec": 0, 00:26:12.612 "w_mbytes_per_sec": 0 00:26:12.612 }, 00:26:12.612 "claimed": true, 00:26:12.612 "claim_type": "exclusive_write", 00:26:12.612 "zoned": false, 00:26:12.612 "supported_io_types": { 00:26:12.612 "read": true, 00:26:12.612 "write": true, 00:26:12.612 "unmap": true, 00:26:12.612 "flush": true, 00:26:12.612 "reset": true, 00:26:12.612 "nvme_admin": false, 00:26:12.612 "nvme_io": false, 00:26:12.612 "nvme_io_md": false, 00:26:12.612 "write_zeroes": true, 00:26:12.612 "zcopy": true, 00:26:12.612 "get_zone_info": false, 00:26:12.612 "zone_management": false, 00:26:12.612 "zone_append": false, 00:26:12.612 "compare": false, 00:26:12.612 "compare_and_write": false, 00:26:12.612 "abort": true, 00:26:12.612 "seek_hole": false, 00:26:12.612 "seek_data": false, 00:26:12.612 "copy": true, 00:26:12.612 "nvme_iov_md": false 00:26:12.612 }, 00:26:12.612 "memory_domains": [ 00:26:12.612 { 00:26:12.612 "dma_device_id": "system", 00:26:12.612 "dma_device_type": 1 00:26:12.612 }, 00:26:12.612 { 00:26:12.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.612 "dma_device_type": 2 00:26:12.612 } 00:26:12.612 ], 00:26:12.612 "driver_specific": {} 00:26:12.612 }' 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:12.612 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:12.871 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:12.871 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:12.871 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:12.871 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:12.871 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:13.130 [2024-08-13 06:21:14.703828] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:13.130 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:13.130 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:13.130 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:13.130 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:26:13.130 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:13.130 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.131 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.390 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.390 "name": "Existed_Raid", 00:26:13.390 "uuid": "af050d55-0f01-4789-9e8b-02426bdd72f1", 00:26:13.390 "strip_size_kb": 0, 00:26:13.390 "state": "online", 00:26:13.390 "raid_level": "raid1", 00:26:13.390 "superblock": true, 00:26:13.390 "num_base_bdevs": 2, 00:26:13.390 "num_base_bdevs_discovered": 1, 
00:26:13.390 "num_base_bdevs_operational": 1, 00:26:13.390 "base_bdevs_list": [ 00:26:13.390 { 00:26:13.390 "name": null, 00:26:13.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.390 "is_configured": false, 00:26:13.390 "data_offset": 256, 00:26:13.390 "data_size": 7936 00:26:13.390 }, 00:26:13.390 { 00:26:13.390 "name": "BaseBdev2", 00:26:13.390 "uuid": "01cc2b67-45d0-45bb-b25b-9b987051cca1", 00:26:13.390 "is_configured": true, 00:26:13.390 "data_offset": 256, 00:26:13.390 "data_size": 7936 00:26:13.390 } 00:26:13.390 ] 00:26:13.390 }' 00:26:13.390 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.390 06:21:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:13.958 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:13.958 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:13.958 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.958 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:13.958 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:13.958 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:13.958 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:14.217 [2024-08-13 06:21:15.897384] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:14.218 [2024-08-13 06:21:15.897498] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:14.218 [2024-08-13 06:21:15.909011] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:14.218 [2024-08-13 06:21:15.909071] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:14.218 [2024-08-13 06:21:15.909081] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:26:14.218 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:14.218 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:14.218 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.218 06:21:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- 
# killprocess 108426 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 108426 ']' 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 108426 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108426 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:14.478 killing process with pid 108426 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108426' 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 108426 00:26:14.478 [2024-08-13 06:21:16.168581] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:14.478 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 108426 00:26:14.478 [2024-08-13 06:21:16.169539] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:14.738 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:26:14.738 00:26:14.738 real 0m9.093s 00:26:14.738 user 0m16.158s 00:26:14.738 sys 0m1.521s 00:26:14.738 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:14.738 06:21:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:14.738 ************************************ 00:26:14.738 END TEST raid_state_function_test_sb_md_interleaved 00:26:14.738 ************************************ 00:26:14.738 06:21:16 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:26:14.738 06:21:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:26:14.738 06:21:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:14.738 06:21:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:14.738 ************************************ 00:26:14.738 START TEST raid_superblock_test_md_interleaved 00:26:14.738 ************************************ 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:14.738 06:21:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=108755 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 108755 /var/tmp/spdk-raid.sock 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 108755 ']' 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:14.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:14.738 06:21:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:14.998 [2024-08-13 06:21:16.581722] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:26:14.998 [2024-08-13 06:21:16.581856] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108755 ] 00:26:14.998 [2024-08-13 06:21:16.721892] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.998 [2024-08-13 06:21:16.766662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.258 [2024-08-13 06:21:16.809883] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:15.258 [2024-08-13 06:21:16.809921] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:26:15.827 malloc1 00:26:15.827 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:16.086 [2024-08-13 06:21:17.754456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:16.086 [2024-08-13 06:21:17.754529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.086 [2024-08-13 06:21:17.754554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:16.086 [2024-08-13 06:21:17.754565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.086 [2024-08-13 06:21:17.756354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.086 [2024-08-13 06:21:17.756390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:16.086 pt1 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:16.086 06:21:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:26:16.345 malloc2 00:26:16.345 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:16.604 [2024-08-13 06:21:18.197630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:16.604 [2024-08-13 06:21:18.197687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.604 [2024-08-13 06:21:18.197704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:16.604 [2024-08-13 06:21:18.197712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.604 [2024-08-13 06:21:18.199424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.604 [2024-08-13 06:21:18.199460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:16.604 pt2 00:26:16.604 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:16.604 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:16.604 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:16.604 [2024-08-13 06:21:18.365361] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:16.604 [2024-08-13 06:21:18.366996] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:16.604 [2024-08-13 06:21:18.367183] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:26:16.604 [2024-08-13 06:21:18.367197] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:16.604 [2024-08-13 06:21:18.367281] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:26:16.604 [2024-08-13 06:21:18.367352] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:26:16.604 [2024-08-13 06:21:18.367380] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:26:16.604 [2024-08-13 06:21:18.367436] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.604 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 
0 2 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.863 "name": "raid_bdev1", 00:26:16.863 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:16.863 "strip_size_kb": 0, 00:26:16.863 "state": "online", 00:26:16.863 "raid_level": "raid1", 00:26:16.863 "superblock": true, 00:26:16.863 "num_base_bdevs": 2, 00:26:16.863 "num_base_bdevs_discovered": 2, 00:26:16.863 "num_base_bdevs_operational": 2, 00:26:16.863 "base_bdevs_list": [ 00:26:16.863 { 00:26:16.863 "name": "pt1", 00:26:16.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:16.863 "is_configured": true, 00:26:16.863 "data_offset": 256, 00:26:16.863 "data_size": 7936 00:26:16.863 }, 00:26:16.863 { 00:26:16.863 "name": "pt2", 00:26:16.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.863 "is_configured": true, 00:26:16.863 "data_offset": 256, 00:26:16.863 "data_size": 7936 00:26:16.863 } 00:26:16.863 ] 00:26:16.863 }' 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.863 06:21:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:17.431 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:17.690 [2024-08-13 06:21:19.359918] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.690 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:17.690 "name": "raid_bdev1", 00:26:17.690 "aliases": [ 00:26:17.690 "8af2d4b1-baf3-4b00-a488-467028bd1748" 00:26:17.690 ], 00:26:17.690 "product_name": "Raid Volume", 00:26:17.690 "block_size": 4128, 00:26:17.690 "num_blocks": 7936, 00:26:17.690 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:17.690 "md_size": 32, 00:26:17.690 "md_interleave": true, 00:26:17.690 "dif_type": 0, 00:26:17.690 "assigned_rate_limits": { 00:26:17.690 "rw_ios_per_sec": 0, 00:26:17.690 "rw_mbytes_per_sec": 0, 00:26:17.690 "r_mbytes_per_sec": 0, 00:26:17.690 "w_mbytes_per_sec": 0 00:26:17.690 }, 00:26:17.690 "claimed": false, 00:26:17.690 "zoned": false, 00:26:17.690 "supported_io_types": { 00:26:17.690 "read": true, 00:26:17.690 "write": true, 00:26:17.690 "unmap": false, 00:26:17.690 "flush": false, 00:26:17.690 "reset": true, 00:26:17.690 "nvme_admin": false, 00:26:17.690 "nvme_io": false, 00:26:17.690 "nvme_io_md": false, 00:26:17.690 "write_zeroes": true, 00:26:17.690 "zcopy": false, 00:26:17.690 "get_zone_info": false, 00:26:17.690 "zone_management": false, 00:26:17.690 "zone_append": false, 00:26:17.690 "compare": false, 00:26:17.690 "compare_and_write": false, 00:26:17.690 "abort": false, 00:26:17.690 "seek_hole": false, 00:26:17.690 "seek_data": false, 00:26:17.690 "copy": false, 00:26:17.690 "nvme_iov_md": false 00:26:17.690 }, 00:26:17.690 "memory_domains": [ 00:26:17.690 { 00:26:17.690 "dma_device_id": "system", 00:26:17.690 "dma_device_type": 1 00:26:17.690 }, 00:26:17.690 { 00:26:17.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.690 "dma_device_type": 2 00:26:17.690 }, 00:26:17.690 { 00:26:17.690 "dma_device_id": "system", 00:26:17.690 "dma_device_type": 1 00:26:17.691 }, 00:26:17.691 { 00:26:17.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.691 "dma_device_type": 2 00:26:17.691 } 00:26:17.691 ], 00:26:17.691 "driver_specific": { 00:26:17.691 "raid": { 00:26:17.691 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:17.691 "strip_size_kb": 0, 00:26:17.691 "state": "online", 00:26:17.691 "raid_level": "raid1", 00:26:17.691 "superblock": true, 00:26:17.691 "num_base_bdevs": 2, 00:26:17.691 "num_base_bdevs_discovered": 2, 00:26:17.691 "num_base_bdevs_operational": 2, 00:26:17.691 "base_bdevs_list": [ 00:26:17.691 { 00:26:17.691 "name": "pt1", 00:26:17.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:17.691 "is_configured": true, 00:26:17.691 "data_offset": 256, 00:26:17.691 "data_size": 7936 00:26:17.691 }, 00:26:17.691 { 00:26:17.691 "name": "pt2", 00:26:17.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.691 "is_configured": true, 00:26:17.691 "data_offset": 256, 00:26:17.691 "data_size": 7936 00:26:17.691 } 00:26:17.691 ] 00:26:17.691 } 00:26:17.691 } 00:26:17.691 }' 00:26:17.691 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.691 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:17.691 pt2' 00:26:17.691 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:26:17.691 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:17.691 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.949 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.949 "name": "pt1", 00:26:17.949 "aliases": [ 00:26:17.949 "00000000-0000-0000-0000-000000000001" 00:26:17.949 ], 00:26:17.949 "product_name": "passthru", 00:26:17.949 "block_size": 4128, 00:26:17.949 "num_blocks": 8192, 00:26:17.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:17.949 "md_size": 32, 00:26:17.949 "md_interleave": true, 00:26:17.949 "dif_type": 0, 00:26:17.949 "assigned_rate_limits": { 00:26:17.949 "rw_ios_per_sec": 0, 00:26:17.949 "rw_mbytes_per_sec": 0, 00:26:17.949 "r_mbytes_per_sec": 0, 00:26:17.949 "w_mbytes_per_sec": 0 00:26:17.949 }, 00:26:17.949 "claimed": true, 00:26:17.949 "claim_type": "exclusive_write", 00:26:17.949 "zoned": false, 00:26:17.949 "supported_io_types": { 00:26:17.949 "read": true, 00:26:17.949 "write": true, 00:26:17.949 "unmap": true, 00:26:17.949 "flush": true, 00:26:17.949 "reset": true, 00:26:17.949 "nvme_admin": false, 00:26:17.949 "nvme_io": false, 00:26:17.949 "nvme_io_md": false, 00:26:17.949 "write_zeroes": true, 00:26:17.949 "zcopy": true, 00:26:17.949 "get_zone_info": false, 00:26:17.949 "zone_management": false, 00:26:17.949 "zone_append": false, 00:26:17.949 "compare": false, 00:26:17.949 "compare_and_write": false, 00:26:17.949 "abort": true, 00:26:17.949 "seek_hole": false, 00:26:17.949 "seek_data": false, 00:26:17.949 "copy": true, 00:26:17.949 "nvme_iov_md": false 00:26:17.949 }, 00:26:17.949 "memory_domains": [ 00:26:17.949 { 00:26:17.949 "dma_device_id": "system", 00:26:17.949 "dma_device_type": 1 00:26:17.949 }, 00:26:17.949 { 00:26:17.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.949 "dma_device_type": 2 00:26:17.949 } 00:26:17.949 ], 00:26:17.949 "driver_specific": { 00:26:17.949 "passthru": { 00:26:17.949 "name": "pt1", 00:26:17.949 "base_bdev_name": "malloc1" 00:26:17.949 } 00:26:17.949 } 00:26:17.949 }' 00:26:17.949 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.949 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.210 06:21:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:18.210 06:21:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.478 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.478 "name": "pt2", 00:26:18.478 "aliases": [ 00:26:18.478 "00000000-0000-0000-0000-000000000002" 00:26:18.478 ], 00:26:18.478 "product_name": "passthru", 00:26:18.478 "block_size": 4128, 00:26:18.478 "num_blocks": 8192, 00:26:18.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:18.478 "md_size": 32, 00:26:18.478 "md_interleave": true, 00:26:18.478 "dif_type": 0, 00:26:18.478 "assigned_rate_limits": { 00:26:18.478 "rw_ios_per_sec": 0, 00:26:18.478 "rw_mbytes_per_sec": 0, 00:26:18.478 "r_mbytes_per_sec": 0, 00:26:18.478 "w_mbytes_per_sec": 0 00:26:18.478 }, 00:26:18.478 "claimed": true, 00:26:18.478 "claim_type": "exclusive_write", 00:26:18.478 "zoned": false, 00:26:18.478 "supported_io_types": { 00:26:18.478 "read": true, 00:26:18.478 "write": true, 00:26:18.478 "unmap": true, 00:26:18.478 "flush": true, 00:26:18.478 "reset": true, 00:26:18.478 "nvme_admin": false, 00:26:18.478 "nvme_io": false, 00:26:18.478 "nvme_io_md": false, 00:26:18.478 "write_zeroes": true, 00:26:18.478 "zcopy": true, 00:26:18.478 "get_zone_info": false, 00:26:18.478 "zone_management": false, 00:26:18.478 "zone_append": false, 00:26:18.478 "compare": false, 00:26:18.478 "compare_and_write": false, 00:26:18.478 "abort": true, 00:26:18.478 "seek_hole": false, 00:26:18.478 "seek_data": false, 00:26:18.478 "copy": true, 00:26:18.478 "nvme_iov_md": false 00:26:18.478 }, 00:26:18.478 "memory_domains": [ 00:26:18.478 { 00:26:18.478 "dma_device_id": "system", 00:26:18.478 "dma_device_type": 1 00:26:18.478 }, 00:26:18.478 { 00:26:18.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.478 "dma_device_type": 2 00:26:18.478 } 00:26:18.478 ], 00:26:18.478 "driver_specific": { 00:26:18.478 "passthru": { 00:26:18.478 "name": "pt2", 00:26:18.478 "base_bdev_name": "malloc2" 00:26:18.478 } 00:26:18.478 } 00:26:18.478 }' 00:26:18.478 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.478 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.478 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:18.478 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.764 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.764 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:18.764 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.764 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.764 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:18.764 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.765 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.765 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:18.765 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:18.765 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:19.039 [2024-08-13 06:21:20.721485] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:19.039 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=8af2d4b1-baf3-4b00-a488-467028bd1748 00:26:19.039 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z 8af2d4b1-baf3-4b00-a488-467028bd1748 ']' 00:26:19.039 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:19.298 [2024-08-13 06:21:20.896976] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:19.298 [2024-08-13 06:21:20.897006] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:19.298 [2024-08-13 06:21:20.897079] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:19.298 [2024-08-13 06:21:20.897126] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:19.298 [2024-08-13 06:21:20.897144] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:26:19.298 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:19.298 06:21:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.557 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:19.557 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:19.557 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:19.557 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:19.557 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:19.557 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:19.816 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:19.816 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:20.075 06:21:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@646 -- # local es=0 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:20.075 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:20.334 [2024-08-13 06:21:21.931173] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:20.334 [2024-08-13 06:21:21.932822] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:20.334 [2024-08-13 06:21:21.932875] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:20.334 [2024-08-13 06:21:21.932911] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:20.334 [2024-08-13 06:21:21.932924] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:20.334 [2024-08-13 06:21:21.932934] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:26:20.334 request: 00:26:20.334 { 00:26:20.335 "name": "raid_bdev1", 00:26:20.335 "raid_level": "raid1", 00:26:20.335 "base_bdevs": [ 00:26:20.335 "malloc1", 00:26:20.335 "malloc2" 00:26:20.335 ], 00:26:20.335 "superblock": false, 00:26:20.335 "method": "bdev_raid_create", 00:26:20.335 "req_id": 1 00:26:20.335 } 00:26:20.335 Got JSON-RPC error response 00:26:20.335 response: 00:26:20.335 { 00:26:20.335 "code": -17, 00:26:20.335 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:20.335 } 00:26:20.335 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@649 -- # es=1 00:26:20.335 06:21:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:26:20.335 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:26:20.335 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:26:20.335 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:20.335 06:21:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:20.594 [2024-08-13 06:21:22.334444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:20.594 [2024-08-13 06:21:22.334490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.594 [2024-08-13 06:21:22.334503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:20.594 [2024-08-13 06:21:22.334512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.594 [2024-08-13 06:21:22.336188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.594 [2024-08-13 06:21:22.336224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:20.594 [2024-08-13 06:21:22.336262] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:20.594 [2024-08-13 06:21:22.336291] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:20.594 pt1 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:20.594 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.853 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.853 "name": "raid_bdev1", 00:26:20.853 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:20.853 "strip_size_kb": 0, 00:26:20.853 "state": "configuring", 00:26:20.853 "raid_level": "raid1", 00:26:20.853 "superblock": true, 00:26:20.853 "num_base_bdevs": 2, 00:26:20.853 "num_base_bdevs_discovered": 1, 00:26:20.853 "num_base_bdevs_operational": 2, 00:26:20.853 "base_bdevs_list": [ 00:26:20.853 { 00:26:20.853 "name": "pt1", 00:26:20.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:20.853 "is_configured": true, 00:26:20.853 "data_offset": 256, 00:26:20.853 "data_size": 7936 00:26:20.853 }, 00:26:20.853 { 00:26:20.853 "name": null, 00:26:20.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:20.853 "is_configured": false, 00:26:20.853 "data_offset": 256, 00:26:20.853 "data_size": 7936 00:26:20.853 } 00:26:20.853 ] 00:26:20.853 }' 00:26:20.853 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.853 06:21:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:21.421 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:26:21.421 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:21.421 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:21.421 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:21.680 [2024-08-13 06:21:23.249063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:21.680 [2024-08-13 06:21:23.249103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.680 [2024-08-13 06:21:23.249117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:21.681 [2024-08-13 06:21:23.249126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.681 [2024-08-13 06:21:23.249223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.681 [2024-08-13 06:21:23.249241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:21.681 [2024-08-13 06:21:23.249273] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:21.681 [2024-08-13 06:21:23.249295] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:21.681 [2024-08-13 06:21:23.249358] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:26:21.681 [2024-08-13 06:21:23.249369] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:21.681 [2024-08-13 06:21:23.249426] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:21.681 [2024-08-13 06:21:23.249475] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:26:21.681 [2024-08-13 06:21:23.249483] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000001900 00:26:21.681 [2024-08-13 06:21:23.249525] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.681 pt2 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.681 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.940 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.940 "name": "raid_bdev1", 00:26:21.940 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:21.940 "strip_size_kb": 0, 00:26:21.940 "state": "online", 00:26:21.940 "raid_level": "raid1", 00:26:21.940 "superblock": true, 00:26:21.940 "num_base_bdevs": 2, 00:26:21.940 "num_base_bdevs_discovered": 2, 00:26:21.940 "num_base_bdevs_operational": 2, 00:26:21.940 "base_bdevs_list": [ 00:26:21.940 { 00:26:21.940 "name": "pt1", 00:26:21.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:21.940 "is_configured": true, 00:26:21.940 "data_offset": 256, 00:26:21.940 "data_size": 7936 00:26:21.940 }, 00:26:21.940 { 00:26:21.940 "name": "pt2", 00:26:21.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:21.940 "is_configured": true, 00:26:21.940 "data_offset": 256, 00:26:21.940 "data_size": 7936 00:26:21.940 } 00:26:21.940 ] 00:26:21.940 }' 00:26:21.940 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.940 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:22.199 06:21:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:22.458 [2024-08-13 06:21:24.171682] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:22.458 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:22.458 "name": "raid_bdev1", 00:26:22.458 "aliases": [ 00:26:22.458 "8af2d4b1-baf3-4b00-a488-467028bd1748" 00:26:22.458 ], 00:26:22.458 "product_name": "Raid Volume", 00:26:22.458 "block_size": 4128, 00:26:22.458 "num_blocks": 7936, 00:26:22.458 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:22.458 "md_size": 32, 00:26:22.458 "md_interleave": true, 00:26:22.458 "dif_type": 0, 00:26:22.458 "assigned_rate_limits": { 00:26:22.458 "rw_ios_per_sec": 0, 00:26:22.458 "rw_mbytes_per_sec": 0, 00:26:22.458 "r_mbytes_per_sec": 0, 00:26:22.458 "w_mbytes_per_sec": 0 00:26:22.458 }, 00:26:22.458 "claimed": false, 00:26:22.458 "zoned": false, 00:26:22.458 "supported_io_types": { 00:26:22.458 "read": true, 00:26:22.458 "write": true, 00:26:22.458 "unmap": false, 00:26:22.458 "flush": false, 00:26:22.458 "reset": true, 00:26:22.458 "nvme_admin": false, 00:26:22.458 "nvme_io": false, 00:26:22.458 "nvme_io_md": false, 00:26:22.458 "write_zeroes": true, 00:26:22.458 "zcopy": false, 00:26:22.458 "get_zone_info": false, 00:26:22.458 "zone_management": false, 00:26:22.458 "zone_append": false, 00:26:22.458 "compare": false, 00:26:22.458 "compare_and_write": false, 00:26:22.458 "abort": false, 00:26:22.458 "seek_hole": false, 00:26:22.458 "seek_data": false, 00:26:22.458 "copy": false, 00:26:22.458 "nvme_iov_md": false 00:26:22.458 }, 00:26:22.458 "memory_domains": [ 00:26:22.458 { 00:26:22.458 "dma_device_id": "system", 00:26:22.458 "dma_device_type": 1 00:26:22.458 }, 00:26:22.458 { 00:26:22.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.458 "dma_device_type": 2 00:26:22.458 }, 00:26:22.458 { 00:26:22.458 "dma_device_id": "system", 00:26:22.458 "dma_device_type": 1 00:26:22.458 }, 00:26:22.458 { 00:26:22.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.458 "dma_device_type": 2 00:26:22.458 } 00:26:22.458 ], 00:26:22.458 "driver_specific": { 00:26:22.458 "raid": { 00:26:22.458 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:22.458 "strip_size_kb": 0, 00:26:22.458 "state": "online", 00:26:22.458 "raid_level": "raid1", 00:26:22.458 "superblock": true, 00:26:22.458 "num_base_bdevs": 2, 00:26:22.458 "num_base_bdevs_discovered": 2, 00:26:22.458 "num_base_bdevs_operational": 2, 00:26:22.458 "base_bdevs_list": [ 00:26:22.458 { 00:26:22.458 "name": "pt1", 00:26:22.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:22.458 "is_configured": true, 00:26:22.458 "data_offset": 256, 00:26:22.458 "data_size": 7936 00:26:22.458 }, 00:26:22.458 { 00:26:22.458 "name": "pt2", 00:26:22.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:22.458 "is_configured": true, 00:26:22.458 "data_offset": 256, 
00:26:22.458 "data_size": 7936 00:26:22.458 } 00:26:22.459 ] 00:26:22.459 } 00:26:22.459 } 00:26:22.459 }' 00:26:22.459 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:22.459 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:22.459 pt2' 00:26:22.459 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:22.459 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:22.459 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:22.718 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:22.718 "name": "pt1", 00:26:22.718 "aliases": [ 00:26:22.718 "00000000-0000-0000-0000-000000000001" 00:26:22.718 ], 00:26:22.718 "product_name": "passthru", 00:26:22.718 "block_size": 4128, 00:26:22.718 "num_blocks": 8192, 00:26:22.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:22.718 "md_size": 32, 00:26:22.718 "md_interleave": true, 00:26:22.718 "dif_type": 0, 00:26:22.718 "assigned_rate_limits": { 00:26:22.718 "rw_ios_per_sec": 0, 00:26:22.718 "rw_mbytes_per_sec": 0, 00:26:22.718 "r_mbytes_per_sec": 0, 00:26:22.718 "w_mbytes_per_sec": 0 00:26:22.718 }, 00:26:22.718 "claimed": true, 00:26:22.718 "claim_type": "exclusive_write", 00:26:22.718 "zoned": false, 00:26:22.718 "supported_io_types": { 00:26:22.718 "read": true, 00:26:22.718 "write": true, 00:26:22.718 "unmap": true, 00:26:22.718 "flush": true, 00:26:22.718 "reset": true, 00:26:22.718 "nvme_admin": false, 00:26:22.718 "nvme_io": false, 00:26:22.718 "nvme_io_md": false, 00:26:22.718 "write_zeroes": true, 00:26:22.718 "zcopy": true, 00:26:22.718 "get_zone_info": false, 00:26:22.718 "zone_management": false, 00:26:22.718 "zone_append": false, 00:26:22.718 "compare": false, 00:26:22.718 "compare_and_write": false, 00:26:22.718 "abort": true, 00:26:22.718 "seek_hole": false, 00:26:22.718 "seek_data": false, 00:26:22.718 "copy": true, 00:26:22.718 "nvme_iov_md": false 00:26:22.718 }, 00:26:22.718 "memory_domains": [ 00:26:22.718 { 00:26:22.718 "dma_device_id": "system", 00:26:22.718 "dma_device_type": 1 00:26:22.718 }, 00:26:22.718 { 00:26:22.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.718 "dma_device_type": 2 00:26:22.718 } 00:26:22.718 ], 00:26:22.718 "driver_specific": { 00:26:22.718 "passthru": { 00:26:22.718 "name": "pt1", 00:26:22.718 "base_bdev_name": "malloc1" 00:26:22.718 } 00:26:22.718 } 00:26:22.718 }' 00:26:22.718 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:22.718 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:22.718 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:22.978 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:23.237 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:23.237 "name": "pt2", 00:26:23.237 "aliases": [ 00:26:23.237 "00000000-0000-0000-0000-000000000002" 00:26:23.237 ], 00:26:23.237 "product_name": "passthru", 00:26:23.237 "block_size": 4128, 00:26:23.237 "num_blocks": 8192, 00:26:23.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:23.237 "md_size": 32, 00:26:23.237 "md_interleave": true, 00:26:23.237 "dif_type": 0, 00:26:23.237 "assigned_rate_limits": { 00:26:23.237 "rw_ios_per_sec": 0, 00:26:23.237 "rw_mbytes_per_sec": 0, 00:26:23.237 "r_mbytes_per_sec": 0, 00:26:23.237 "w_mbytes_per_sec": 0 00:26:23.237 }, 00:26:23.237 "claimed": true, 00:26:23.237 "claim_type": "exclusive_write", 00:26:23.237 "zoned": false, 00:26:23.237 "supported_io_types": { 00:26:23.237 "read": true, 00:26:23.237 "write": true, 00:26:23.237 "unmap": true, 00:26:23.237 "flush": true, 00:26:23.237 "reset": true, 00:26:23.237 "nvme_admin": false, 00:26:23.237 "nvme_io": false, 00:26:23.237 "nvme_io_md": false, 00:26:23.237 "write_zeroes": true, 00:26:23.237 "zcopy": true, 00:26:23.237 "get_zone_info": false, 00:26:23.237 "zone_management": false, 00:26:23.237 "zone_append": false, 00:26:23.237 "compare": false, 00:26:23.237 "compare_and_write": false, 00:26:23.237 "abort": true, 00:26:23.237 "seek_hole": false, 00:26:23.237 "seek_data": false, 00:26:23.237 "copy": true, 00:26:23.237 "nvme_iov_md": false 00:26:23.237 }, 00:26:23.237 "memory_domains": [ 00:26:23.237 { 00:26:23.237 "dma_device_id": "system", 00:26:23.237 "dma_device_type": 1 00:26:23.237 }, 00:26:23.237 { 00:26:23.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.237 "dma_device_type": 2 00:26:23.237 } 00:26:23.237 ], 00:26:23.237 "driver_specific": { 00:26:23.237 "passthru": { 00:26:23.237 "name": "pt2", 00:26:23.237 "base_bdev_name": "malloc2" 00:26:23.237 } 00:26:23.237 } 00:26:23.237 }' 00:26:23.237 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.237 06:21:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.237 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:23.237 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.496 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.496 
06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:23.496 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.496 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.496 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:23.496 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.497 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:23.756 [2024-08-13 06:21:25.473495] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' 8af2d4b1-baf3-4b00-a488-467028bd1748 '!=' 8af2d4b1-baf3-4b00-a488-467028bd1748 ']' 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:26:23.756 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:24.015 [2024-08-13 06:21:25.649006] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.015 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.274 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.274 "name": "raid_bdev1", 00:26:24.274 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:24.274 "strip_size_kb": 0, 00:26:24.274 "state": "online", 00:26:24.274 "raid_level": "raid1", 00:26:24.274 "superblock": true, 00:26:24.274 "num_base_bdevs": 2, 00:26:24.274 "num_base_bdevs_discovered": 1, 00:26:24.274 "num_base_bdevs_operational": 1, 00:26:24.274 "base_bdevs_list": [ 00:26:24.274 { 00:26:24.275 "name": null, 00:26:24.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.275 "is_configured": false, 00:26:24.275 "data_offset": 256, 00:26:24.275 "data_size": 7936 00:26:24.275 }, 00:26:24.275 { 00:26:24.275 "name": "pt2", 00:26:24.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:24.275 "is_configured": true, 00:26:24.275 "data_offset": 256, 00:26:24.275 "data_size": 7936 00:26:24.275 } 00:26:24.275 ] 00:26:24.275 }' 00:26:24.275 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.275 06:21:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:24.844 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:25.104 [2024-08-13 06:21:26.647237] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:25.104 [2024-08-13 06:21:26.647263] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:25.104 [2024-08-13 06:21:26.647307] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:25.104 [2024-08-13 06:21:26.647339] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:25.104 [2024-08-13 06:21:26.647349] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:26:25.104 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.104 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:25.104 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:25.104 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:25.104 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:25.104 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:25.104 06:21:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:25.363 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:25.363 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:25.363 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:25.363 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs 
- 1 )) 00:26:25.363 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:26:25.363 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:25.624 [2024-08-13 06:21:27.258145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:25.624 [2024-08-13 06:21:27.258194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.624 [2024-08-13 06:21:27.258208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:25.624 [2024-08-13 06:21:27.258217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.624 [2024-08-13 06:21:27.259907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.624 [2024-08-13 06:21:27.259943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:25.624 [2024-08-13 06:21:27.259979] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:25.624 [2024-08-13 06:21:27.260006] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:25.624 [2024-08-13 06:21:27.260066] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:26:25.624 [2024-08-13 06:21:27.260077] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:25.624 [2024-08-13 06:21:27.260137] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:26:25.624 [2024-08-13 06:21:27.260189] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:26:25.624 [2024-08-13 06:21:27.260195] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:26:25.624 [2024-08-13 06:21:27.260236] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.624 pt2 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.624 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.624 06:21:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.882 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.883 "name": "raid_bdev1", 00:26:25.883 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:25.883 "strip_size_kb": 0, 00:26:25.883 "state": "online", 00:26:25.883 "raid_level": "raid1", 00:26:25.883 "superblock": true, 00:26:25.883 "num_base_bdevs": 2, 00:26:25.883 "num_base_bdevs_discovered": 1, 00:26:25.883 "num_base_bdevs_operational": 1, 00:26:25.883 "base_bdevs_list": [ 00:26:25.883 { 00:26:25.883 "name": null, 00:26:25.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.883 "is_configured": false, 00:26:25.883 "data_offset": 256, 00:26:25.883 "data_size": 7936 00:26:25.883 }, 00:26:25.883 { 00:26:25.883 "name": "pt2", 00:26:25.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.883 "is_configured": true, 00:26:25.883 "data_offset": 256, 00:26:25.883 "data_size": 7936 00:26:25.883 } 00:26:25.883 ] 00:26:25.883 }' 00:26:25.883 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.883 06:21:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:26.449 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:26.449 [2024-08-13 06:21:28.220603] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:26.449 [2024-08-13 06:21:28.220629] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:26.449 [2024-08-13 06:21:28.220668] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:26.449 [2024-08-13 06:21:28.220700] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:26.449 [2024-08-13 06:21:28.220708] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:26:26.707 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.707 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:26.707 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:26.707 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:26.707 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:26:26.707 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:26.966 [2024-08-13 06:21:28.615919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:26.966 [2024-08-13 06:21:28.615958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.966 [2024-08-13 06:21:28.615973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:26.966 [2024-08-13 06:21:28.615981] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.966 [2024-08-13 06:21:28.617730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.966 [2024-08-13 06:21:28.617758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:26.966 [2024-08-13 06:21:28.617797] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:26.966 [2024-08-13 06:21:28.617823] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:26.966 [2024-08-13 06:21:28.617905] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:26.966 [2024-08-13 06:21:28.617922] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:26.966 [2024-08-13 06:21:28.617940] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:26:26.966 [2024-08-13 06:21:28.617964] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:26.966 [2024-08-13 06:21:28.618026] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:26:26.966 [2024-08-13 06:21:28.618051] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:26.966 [2024-08-13 06:21:28.618126] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:26.966 [2024-08-13 06:21:28.618173] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:26:26.966 [2024-08-13 06:21:28.618182] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:26:26.966 [2024-08-13 06:21:28.618229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:26.966 pt1 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.966 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.224 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.224 "name": "raid_bdev1", 00:26:27.224 "uuid": "8af2d4b1-baf3-4b00-a488-467028bd1748", 00:26:27.224 "strip_size_kb": 0, 00:26:27.224 "state": "online", 00:26:27.224 "raid_level": "raid1", 00:26:27.224 "superblock": true, 00:26:27.224 "num_base_bdevs": 2, 00:26:27.224 "num_base_bdevs_discovered": 1, 00:26:27.224 "num_base_bdevs_operational": 1, 00:26:27.224 "base_bdevs_list": [ 00:26:27.224 { 00:26:27.224 "name": null, 00:26:27.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.224 "is_configured": false, 00:26:27.224 "data_offset": 256, 00:26:27.224 "data_size": 7936 00:26:27.224 }, 00:26:27.224 { 00:26:27.224 "name": "pt2", 00:26:27.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:27.224 "is_configured": true, 00:26:27.224 "data_offset": 256, 00:26:27.224 "data_size": 7936 00:26:27.224 } 00:26:27.224 ] 00:26:27.224 }' 00:26:27.224 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.224 06:21:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:27.790 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:27.790 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:28.049 [2024-08-13 06:21:29.794145] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' 8af2d4b1-baf3-4b00-a488-467028bd1748 '!=' 8af2d4b1-baf3-4b00-a488-467028bd1748 ']' 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 108755 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 108755 ']' 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 108755 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:28.049 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108755 00:26:28.308 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:28.308 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:28.308 killing process with pid 108755 00:26:28.308 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108755' 
00:26:28.308 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 108755 00:26:28.308 [2024-08-13 06:21:29.867512] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:28.308 [2024-08-13 06:21:29.867584] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:28.308 [2024-08-13 06:21:29.867614] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:28.308 [2024-08-13 06:21:29.867624] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:26:28.308 06:21:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 108755 00:26:28.308 [2024-08-13 06:21:29.890679] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:28.568 06:21:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:26:28.568 00:26:28.568 real 0m13.640s 00:26:28.568 user 0m24.715s 00:26:28.568 sys 0m2.346s 00:26:28.568 06:21:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:28.568 06:21:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:28.568 ************************************ 00:26:28.568 END TEST raid_superblock_test_md_interleaved 00:26:28.568 ************************************ 00:26:28.568 06:21:30 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:26:28.568 06:21:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:26:28.568 06:21:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:28.568 06:21:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:28.568 ************************************ 00:26:28.568 START TEST raid_rebuild_test_sb_md_interleaved 00:26:28.568 ************************************ 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=109234 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 109234 /var/tmp/spdk-raid.sock 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 109234 ']' 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:28.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:28.568 06:21:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:28.568 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:28.568 Zero copy mechanism will not be used. 00:26:28.568 [2024-08-13 06:21:30.314133] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:26:28.568 [2024-08-13 06:21:30.314319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109234 ] 00:26:28.827 [2024-08-13 06:21:30.462465] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.827 [2024-08-13 06:21:30.509356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.827 [2024-08-13 06:21:30.552491] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:28.827 [2024-08-13 06:21:30.552538] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:29.394 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:29.394 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:26:29.394 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:29.394 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:26:29.653 BaseBdev1_malloc 00:26:29.653 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:29.911 [2024-08-13 06:21:31.504523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:29.911 [2024-08-13 06:21:31.504584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.911 [2024-08-13 06:21:31.504612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:29.911 [2024-08-13 06:21:31.504626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.911 [2024-08-13 06:21:31.506372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.911 [2024-08-13 06:21:31.506412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:29.911 BaseBdev1 00:26:29.911 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:29.911 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:26:30.170 BaseBdev2_malloc 00:26:30.170 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:30.170 [2024-08-13 06:21:31.930887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:30.170 [2024-08-13 06:21:31.930940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.170 [2024-08-13 06:21:31.930959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:30.170 [2024-08-13 06:21:31.930969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.170 [2024-08-13 06:21:31.932636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:30.170 [2024-08-13 06:21:31.932669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:30.170 BaseBdev2 00:26:30.428 06:21:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:26:30.428 spare_malloc 00:26:30.428 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:30.686 spare_delay 00:26:30.686 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:30.944 [2024-08-13 06:21:32.498483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:30.944 [2024-08-13 06:21:32.498540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.944 [2024-08-13 06:21:32.498560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:30.944 [2024-08-13 06:21:32.498572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.944 [2024-08-13 06:21:32.500314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.944 [2024-08-13 06:21:32.500350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:30.944 spare 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:30.944 [2024-08-13 06:21:32.694222] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:30.944 [2024-08-13 06:21:32.695865] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:30.944 [2024-08-13 06:21:32.696019] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:26:30.944 [2024-08-13 06:21:32.696046] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:30.944 [2024-08-13 06:21:32.696126] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:30.944 [2024-08-13 06:21:32.696193] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:26:30.944 [2024-08-13 06:21:32.696202] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:26:30.944 [2024-08-13 06:21:32.696271] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.944 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.203 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.203 "name": "raid_bdev1", 00:26:31.203 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:31.203 "strip_size_kb": 0, 00:26:31.203 "state": "online", 00:26:31.203 "raid_level": "raid1", 00:26:31.203 "superblock": true, 00:26:31.203 "num_base_bdevs": 2, 00:26:31.203 "num_base_bdevs_discovered": 2, 00:26:31.203 "num_base_bdevs_operational": 2, 00:26:31.203 "base_bdevs_list": [ 00:26:31.203 { 00:26:31.203 "name": "BaseBdev1", 00:26:31.203 "uuid": "3a250737-e06a-5775-b4c7-604c03fe5480", 00:26:31.203 "is_configured": true, 00:26:31.203 "data_offset": 256, 00:26:31.203 "data_size": 7936 00:26:31.203 }, 00:26:31.203 { 00:26:31.203 "name": "BaseBdev2", 00:26:31.203 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:31.203 "is_configured": true, 00:26:31.203 "data_offset": 256, 00:26:31.203 "data_size": 7936 00:26:31.203 } 00:26:31.203 ] 00:26:31.203 }' 00:26:31.203 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.203 06:21:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:31.772 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:31.772 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:32.038 [2024-08-13 06:21:33.612837] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.038 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:26:32.038 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:32.038 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.301 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:26:32.301 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:32.301 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:26:32.301 06:21:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:32.301 [2024-08-13 06:21:34.011926] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.301 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.560 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.560 "name": "raid_bdev1", 00:26:32.560 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:32.560 "strip_size_kb": 0, 00:26:32.560 "state": "online", 00:26:32.560 "raid_level": "raid1", 00:26:32.560 "superblock": true, 00:26:32.560 "num_base_bdevs": 2, 00:26:32.560 "num_base_bdevs_discovered": 1, 00:26:32.560 "num_base_bdevs_operational": 1, 00:26:32.560 "base_bdevs_list": [ 00:26:32.560 { 00:26:32.560 "name": null, 00:26:32.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.560 "is_configured": false, 00:26:32.560 "data_offset": 256, 00:26:32.560 "data_size": 7936 00:26:32.560 }, 00:26:32.560 { 00:26:32.560 "name": "BaseBdev2", 00:26:32.560 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:32.560 "is_configured": true, 00:26:32.560 "data_offset": 256, 00:26:32.560 "data_size": 7936 00:26:32.560 } 00:26:32.560 ] 00:26:32.560 }' 00:26:32.560 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.560 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:33.130 06:21:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:33.389 [2024-08-13 06:21:35.002354] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:33.389 [2024-08-13 06:21:35.005154] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:26:33.389 [2024-08-13 06:21:35.006849] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:26:33.389 06:21:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:34.327 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:34.327 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:34.327 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:34.327 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:34.327 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:34.327 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.327 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.586 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.586 "name": "raid_bdev1", 00:26:34.586 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:34.586 "strip_size_kb": 0, 00:26:34.586 "state": "online", 00:26:34.586 "raid_level": "raid1", 00:26:34.586 "superblock": true, 00:26:34.586 "num_base_bdevs": 2, 00:26:34.586 "num_base_bdevs_discovered": 2, 00:26:34.586 "num_base_bdevs_operational": 2, 00:26:34.586 "process": { 00:26:34.586 "type": "rebuild", 00:26:34.586 "target": "spare", 00:26:34.586 "progress": { 00:26:34.586 "blocks": 3072, 00:26:34.586 "percent": 38 00:26:34.586 } 00:26:34.586 }, 00:26:34.586 "base_bdevs_list": [ 00:26:34.586 { 00:26:34.586 "name": "spare", 00:26:34.586 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:34.586 "is_configured": true, 00:26:34.586 "data_offset": 256, 00:26:34.586 "data_size": 7936 00:26:34.586 }, 00:26:34.586 { 00:26:34.586 "name": "BaseBdev2", 00:26:34.586 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:34.586 "is_configured": true, 00:26:34.586 "data_offset": 256, 00:26:34.586 "data_size": 7936 00:26:34.586 } 00:26:34.586 ] 00:26:34.586 }' 00:26:34.586 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.586 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.586 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.586 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.586 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:34.845 [2024-08-13 06:21:36.492984] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:34.845 [2024-08-13 06:21:36.512485] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:34.845 [2024-08-13 06:21:36.512536] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.845 [2024-08-13 06:21:36.512549] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:34.845 [2024-08-13 06:21:36.512557] 
bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.846 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.105 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:35.105 "name": "raid_bdev1", 00:26:35.105 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:35.105 "strip_size_kb": 0, 00:26:35.105 "state": "online", 00:26:35.105 "raid_level": "raid1", 00:26:35.105 "superblock": true, 00:26:35.105 "num_base_bdevs": 2, 00:26:35.105 "num_base_bdevs_discovered": 1, 00:26:35.105 "num_base_bdevs_operational": 1, 00:26:35.105 "base_bdevs_list": [ 00:26:35.105 { 00:26:35.105 "name": null, 00:26:35.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.105 "is_configured": false, 00:26:35.105 "data_offset": 256, 00:26:35.105 "data_size": 7936 00:26:35.105 }, 00:26:35.105 { 00:26:35.105 "name": "BaseBdev2", 00:26:35.105 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:35.105 "is_configured": true, 00:26:35.105 "data_offset": 256, 00:26:35.105 "data_size": 7936 00:26:35.105 } 00:26:35.105 ] 00:26:35.105 }' 00:26:35.105 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:35.105 06:21:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:35.674 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:35.674 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.674 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:35.674 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:35.674 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.674 
06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.674 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.934 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:35.934 "name": "raid_bdev1", 00:26:35.934 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:35.934 "strip_size_kb": 0, 00:26:35.934 "state": "online", 00:26:35.934 "raid_level": "raid1", 00:26:35.934 "superblock": true, 00:26:35.934 "num_base_bdevs": 2, 00:26:35.934 "num_base_bdevs_discovered": 1, 00:26:35.934 "num_base_bdevs_operational": 1, 00:26:35.934 "base_bdevs_list": [ 00:26:35.934 { 00:26:35.934 "name": null, 00:26:35.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.934 "is_configured": false, 00:26:35.934 "data_offset": 256, 00:26:35.934 "data_size": 7936 00:26:35.934 }, 00:26:35.934 { 00:26:35.934 "name": "BaseBdev2", 00:26:35.934 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:35.934 "is_configured": true, 00:26:35.934 "data_offset": 256, 00:26:35.934 "data_size": 7936 00:26:35.934 } 00:26:35.934 ] 00:26:35.934 }' 00:26:35.934 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:35.934 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:35.934 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:35.934 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:35.934 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:36.193 [2024-08-13 06:21:37.813397] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:36.193 [2024-08-13 06:21:37.815374] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:36.193 [2024-08-13 06:21:37.817005] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:36.193 06:21:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:37.132 06:21:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:37.132 06:21:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:37.132 06:21:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:37.132 06:21:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:37.132 06:21:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:37.132 06:21:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.132 06:21:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.391 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.391 "name": "raid_bdev1", 00:26:37.391 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:37.391 "strip_size_kb": 0, 00:26:37.391 "state": "online", 00:26:37.391 "raid_level": "raid1", 00:26:37.391 "superblock": true, 00:26:37.391 "num_base_bdevs": 2, 00:26:37.391 "num_base_bdevs_discovered": 2, 00:26:37.391 "num_base_bdevs_operational": 2, 00:26:37.391 "process": { 00:26:37.391 "type": "rebuild", 00:26:37.391 "target": "spare", 00:26:37.391 "progress": { 00:26:37.391 "blocks": 3072, 00:26:37.391 "percent": 38 00:26:37.391 } 00:26:37.391 }, 00:26:37.391 "base_bdevs_list": [ 00:26:37.391 { 00:26:37.391 "name": "spare", 00:26:37.391 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:37.391 "is_configured": true, 00:26:37.391 "data_offset": 256, 00:26:37.391 "data_size": 7936 00:26:37.391 }, 00:26:37.391 { 00:26:37.391 "name": "BaseBdev2", 00:26:37.391 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:37.391 "is_configured": true, 00:26:37.391 "data_offset": 256, 00:26:37.392 "data_size": 7936 00:26:37.392 } 00:26:37.392 ] 00:26:37.392 }' 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:26:37.392 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1251 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.392 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:26:37.651 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.651 "name": "raid_bdev1", 00:26:37.651 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:37.651 "strip_size_kb": 0, 00:26:37.652 "state": "online", 00:26:37.652 "raid_level": "raid1", 00:26:37.652 "superblock": true, 00:26:37.652 "num_base_bdevs": 2, 00:26:37.652 "num_base_bdevs_discovered": 2, 00:26:37.652 "num_base_bdevs_operational": 2, 00:26:37.652 "process": { 00:26:37.652 "type": "rebuild", 00:26:37.652 "target": "spare", 00:26:37.652 "progress": { 00:26:37.652 "blocks": 3584, 00:26:37.652 "percent": 45 00:26:37.652 } 00:26:37.652 }, 00:26:37.652 "base_bdevs_list": [ 00:26:37.652 { 00:26:37.652 "name": "spare", 00:26:37.652 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:37.652 "is_configured": true, 00:26:37.652 "data_offset": 256, 00:26:37.652 "data_size": 7936 00:26:37.652 }, 00:26:37.652 { 00:26:37.652 "name": "BaseBdev2", 00:26:37.652 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:37.652 "is_configured": true, 00:26:37.652 "data_offset": 256, 00:26:37.652 "data_size": 7936 00:26:37.652 } 00:26:37.652 ] 00:26:37.652 }' 00:26:37.652 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.652 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:37.652 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.652 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.652 06:21:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:39.031 "name": "raid_bdev1", 00:26:39.031 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:39.031 "strip_size_kb": 0, 00:26:39.031 "state": "online", 00:26:39.031 "raid_level": "raid1", 00:26:39.031 "superblock": true, 00:26:39.031 "num_base_bdevs": 2, 00:26:39.031 "num_base_bdevs_discovered": 2, 00:26:39.031 "num_base_bdevs_operational": 2, 00:26:39.031 "process": { 00:26:39.031 "type": "rebuild", 00:26:39.031 "target": "spare", 00:26:39.031 "progress": { 00:26:39.031 "blocks": 6912, 00:26:39.031 "percent": 87 
00:26:39.031 } 00:26:39.031 }, 00:26:39.031 "base_bdevs_list": [ 00:26:39.031 { 00:26:39.031 "name": "spare", 00:26:39.031 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:39.031 "is_configured": true, 00:26:39.031 "data_offset": 256, 00:26:39.031 "data_size": 7936 00:26:39.031 }, 00:26:39.031 { 00:26:39.031 "name": "BaseBdev2", 00:26:39.031 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:39.031 "is_configured": true, 00:26:39.031 "data_offset": 256, 00:26:39.031 "data_size": 7936 00:26:39.031 } 00:26:39.031 ] 00:26:39.031 }' 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:39.031 06:21:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:39.290 [2024-08-13 06:21:40.926756] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:39.290 [2024-08-13 06:21:40.926839] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:39.290 [2024-08-13 06:21:40.926937] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:40.236 "name": "raid_bdev1", 00:26:40.236 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:40.236 "strip_size_kb": 0, 00:26:40.236 "state": "online", 00:26:40.236 "raid_level": "raid1", 00:26:40.236 "superblock": true, 00:26:40.236 "num_base_bdevs": 2, 00:26:40.236 "num_base_bdevs_discovered": 2, 00:26:40.236 "num_base_bdevs_operational": 2, 00:26:40.236 "base_bdevs_list": [ 00:26:40.236 { 00:26:40.236 "name": "spare", 00:26:40.236 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:40.236 "is_configured": true, 00:26:40.236 "data_offset": 256, 00:26:40.236 "data_size": 7936 00:26:40.236 }, 00:26:40.236 { 00:26:40.236 "name": "BaseBdev2", 00:26:40.236 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:40.236 "is_configured": true, 00:26:40.236 "data_offset": 256, 
00:26:40.236 "data_size": 7936 00:26:40.236 } 00:26:40.236 ] 00:26:40.236 }' 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:40.236 06:21:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.236 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.516 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:40.516 "name": "raid_bdev1", 00:26:40.516 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:40.516 "strip_size_kb": 0, 00:26:40.516 "state": "online", 00:26:40.516 "raid_level": "raid1", 00:26:40.516 "superblock": true, 00:26:40.516 "num_base_bdevs": 2, 00:26:40.516 "num_base_bdevs_discovered": 2, 00:26:40.516 "num_base_bdevs_operational": 2, 00:26:40.516 "base_bdevs_list": [ 00:26:40.516 { 00:26:40.516 "name": "spare", 00:26:40.516 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:40.516 "is_configured": true, 00:26:40.516 "data_offset": 256, 00:26:40.516 "data_size": 7936 00:26:40.516 }, 00:26:40.516 { 00:26:40.516 "name": "BaseBdev2", 00:26:40.516 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:40.516 "is_configured": true, 00:26:40.516 "data_offset": 256, 00:26:40.516 "data_size": 7936 00:26:40.516 } 00:26:40.516 ] 00:26:40.516 }' 00:26:40.516 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:40.516 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:40.516 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.780 "name": "raid_bdev1", 00:26:40.780 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:40.780 "strip_size_kb": 0, 00:26:40.780 "state": "online", 00:26:40.780 "raid_level": "raid1", 00:26:40.780 "superblock": true, 00:26:40.780 "num_base_bdevs": 2, 00:26:40.780 "num_base_bdevs_discovered": 2, 00:26:40.780 "num_base_bdevs_operational": 2, 00:26:40.780 "base_bdevs_list": [ 00:26:40.780 { 00:26:40.780 "name": "spare", 00:26:40.780 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:40.780 "is_configured": true, 00:26:40.780 "data_offset": 256, 00:26:40.780 "data_size": 7936 00:26:40.780 }, 00:26:40.780 { 00:26:40.780 "name": "BaseBdev2", 00:26:40.780 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:40.780 "is_configured": true, 00:26:40.780 "data_offset": 256, 00:26:40.780 "data_size": 7936 00:26:40.780 } 00:26:40.780 ] 00:26:40.780 }' 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.780 06:21:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:41.348 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:41.607 [2024-08-13 06:21:43.197915] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:41.607 [2024-08-13 06:21:43.198023] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:41.607 [2024-08-13 06:21:43.198126] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:41.607 [2024-08-13 06:21:43.198206] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:41.607 [2024-08-13 06:21:43.198304] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:26:41.607 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:26:41.607 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:41.866 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:41.866 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:26:41.866 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:26:41.866 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:41.866 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:42.125 [2024-08-13 06:21:43.816822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:42.125 [2024-08-13 06:21:43.816920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.125 [2024-08-13 06:21:43.816960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:42.125 [2024-08-13 06:21:43.816987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.125 [2024-08-13 06:21:43.818754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.125 [2024-08-13 06:21:43.818827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:42.125 [2024-08-13 06:21:43.818896] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:42.125 [2024-08-13 06:21:43.818952] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:42.125 [2024-08-13 06:21:43.819094] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:42.125 spare 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.125 06:21:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.384 [2024-08-13 06:21:43.919014] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:26:42.384 [2024-08-13 06:21:43.919102] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:42.384 [2024-08-13 06:21:43.919211] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:26:42.384 [2024-08-13 06:21:43.919320] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:26:42.384 [2024-08-13 06:21:43.919356] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:26:42.384 [2024-08-13 06:21:43.919452] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.384 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.384 "name": "raid_bdev1", 00:26:42.384 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:42.384 "strip_size_kb": 0, 00:26:42.384 "state": "online", 00:26:42.384 "raid_level": "raid1", 00:26:42.384 "superblock": true, 00:26:42.384 "num_base_bdevs": 2, 00:26:42.384 "num_base_bdevs_discovered": 2, 00:26:42.384 "num_base_bdevs_operational": 2, 00:26:42.384 "base_bdevs_list": [ 00:26:42.384 { 00:26:42.384 "name": "spare", 00:26:42.384 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:42.384 "is_configured": true, 00:26:42.384 "data_offset": 256, 00:26:42.384 "data_size": 7936 00:26:42.384 }, 00:26:42.384 { 00:26:42.384 "name": "BaseBdev2", 00:26:42.384 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:42.384 "is_configured": true, 00:26:42.384 "data_offset": 256, 00:26:42.384 "data_size": 7936 00:26:42.384 } 00:26:42.384 ] 00:26:42.384 }' 00:26:42.384 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.384 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:42.951 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:42.951 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:42.951 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:42.951 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:42.951 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:42.951 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.951 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.210 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:43.210 "name": "raid_bdev1", 00:26:43.210 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:43.210 "strip_size_kb": 0, 00:26:43.210 "state": "online", 00:26:43.210 "raid_level": "raid1", 00:26:43.210 "superblock": true, 00:26:43.210 "num_base_bdevs": 2, 00:26:43.210 "num_base_bdevs_discovered": 2, 00:26:43.210 "num_base_bdevs_operational": 2, 00:26:43.210 "base_bdevs_list": [ 00:26:43.210 { 00:26:43.210 "name": "spare", 00:26:43.210 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:43.210 "is_configured": true, 
00:26:43.210 "data_offset": 256, 00:26:43.210 "data_size": 7936 00:26:43.210 }, 00:26:43.210 { 00:26:43.210 "name": "BaseBdev2", 00:26:43.210 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:43.210 "is_configured": true, 00:26:43.210 "data_offset": 256, 00:26:43.210 "data_size": 7936 00:26:43.210 } 00:26:43.210 ] 00:26:43.210 }' 00:26:43.210 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:43.210 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:43.210 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:43.210 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:43.210 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.210 06:21:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:43.469 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.469 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:43.728 [2024-08-13 06:21:45.329731] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.728 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.988 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.988 "name": "raid_bdev1", 00:26:43.988 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:43.988 "strip_size_kb": 0, 00:26:43.988 "state": "online", 00:26:43.988 "raid_level": "raid1", 00:26:43.988 "superblock": true, 00:26:43.988 
"num_base_bdevs": 2, 00:26:43.988 "num_base_bdevs_discovered": 1, 00:26:43.988 "num_base_bdevs_operational": 1, 00:26:43.988 "base_bdevs_list": [ 00:26:43.988 { 00:26:43.988 "name": null, 00:26:43.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.988 "is_configured": false, 00:26:43.988 "data_offset": 256, 00:26:43.988 "data_size": 7936 00:26:43.988 }, 00:26:43.988 { 00:26:43.988 "name": "BaseBdev2", 00:26:43.988 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:43.988 "is_configured": true, 00:26:43.988 "data_offset": 256, 00:26:43.988 "data_size": 7936 00:26:43.988 } 00:26:43.988 ] 00:26:43.988 }' 00:26:43.988 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.988 06:21:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:44.556 06:21:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:44.556 [2024-08-13 06:21:46.288084] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:44.556 [2024-08-13 06:21:46.288240] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:44.556 [2024-08-13 06:21:46.288306] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:44.556 [2024-08-13 06:21:46.288353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:44.556 [2024-08-13 06:21:46.291013] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:26:44.556 [2024-08-13 06:21:46.292707] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:44.556 06:21:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:45.933 "name": "raid_bdev1", 00:26:45.933 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:45.933 "strip_size_kb": 0, 00:26:45.933 "state": "online", 00:26:45.933 "raid_level": "raid1", 00:26:45.933 "superblock": true, 00:26:45.933 "num_base_bdevs": 2, 00:26:45.933 "num_base_bdevs_discovered": 2, 00:26:45.933 "num_base_bdevs_operational": 2, 00:26:45.933 "process": { 00:26:45.933 "type": "rebuild", 00:26:45.933 "target": "spare", 00:26:45.933 "progress": { 
00:26:45.933 "blocks": 3072, 00:26:45.933 "percent": 38 00:26:45.933 } 00:26:45.933 }, 00:26:45.933 "base_bdevs_list": [ 00:26:45.933 { 00:26:45.933 "name": "spare", 00:26:45.933 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:45.933 "is_configured": true, 00:26:45.933 "data_offset": 256, 00:26:45.933 "data_size": 7936 00:26:45.933 }, 00:26:45.933 { 00:26:45.933 "name": "BaseBdev2", 00:26:45.933 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:45.933 "is_configured": true, 00:26:45.933 "data_offset": 256, 00:26:45.933 "data_size": 7936 00:26:45.933 } 00:26:45.933 ] 00:26:45.933 }' 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:45.933 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:45.934 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:45.934 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:45.934 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:46.192 [2024-08-13 06:21:47.816604] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:46.192 [2024-08-13 06:21:47.897742] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:46.193 [2024-08-13 06:21:47.897810] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.193 [2024-08-13 06:21:47.897826] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:46.193 [2024-08-13 06:21:47.897834] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.193 06:21:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.452 06:21:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.452 "name": "raid_bdev1", 00:26:46.452 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:46.452 "strip_size_kb": 0, 00:26:46.452 "state": "online", 00:26:46.452 "raid_level": "raid1", 00:26:46.452 "superblock": true, 00:26:46.452 "num_base_bdevs": 2, 00:26:46.452 "num_base_bdevs_discovered": 1, 00:26:46.452 "num_base_bdevs_operational": 1, 00:26:46.452 "base_bdevs_list": [ 00:26:46.452 { 00:26:46.452 "name": null, 00:26:46.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.452 "is_configured": false, 00:26:46.452 "data_offset": 256, 00:26:46.452 "data_size": 7936 00:26:46.452 }, 00:26:46.452 { 00:26:46.452 "name": "BaseBdev2", 00:26:46.452 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:46.452 "is_configured": true, 00:26:46.452 "data_offset": 256, 00:26:46.452 "data_size": 7936 00:26:46.452 } 00:26:46.452 ] 00:26:46.452 }' 00:26:46.452 06:21:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.452 06:21:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:47.021 06:21:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:47.281 [2024-08-13 06:21:48.847068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:47.281 [2024-08-13 06:21:48.847197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.281 [2024-08-13 06:21:48.847237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:47.281 [2024-08-13 06:21:48.847265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.281 [2024-08-13 06:21:48.847425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.281 [2024-08-13 06:21:48.847475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:47.281 [2024-08-13 06:21:48.847556] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:47.281 [2024-08-13 06:21:48.847592] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:47.281 [2024-08-13 06:21:48.847641] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:47.281 [2024-08-13 06:21:48.847694] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:47.281 [2024-08-13 06:21:48.849380] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:26:47.281 [2024-08-13 06:21:48.851106] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:47.281 spare 00:26:47.281 06:21:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:26:48.220 06:21:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.220 06:21:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:48.220 06:21:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:48.220 06:21:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:48.220 06:21:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:48.220 06:21:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.220 06:21:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.480 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.480 "name": "raid_bdev1", 00:26:48.480 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:48.480 "strip_size_kb": 0, 00:26:48.480 "state": "online", 00:26:48.480 "raid_level": "raid1", 00:26:48.480 "superblock": true, 00:26:48.480 "num_base_bdevs": 2, 00:26:48.480 "num_base_bdevs_discovered": 2, 00:26:48.480 "num_base_bdevs_operational": 2, 00:26:48.480 "process": { 00:26:48.480 "type": "rebuild", 00:26:48.480 "target": "spare", 00:26:48.480 "progress": { 00:26:48.480 "blocks": 3072, 00:26:48.480 "percent": 38 00:26:48.480 } 00:26:48.480 }, 00:26:48.480 "base_bdevs_list": [ 00:26:48.480 { 00:26:48.480 "name": "spare", 00:26:48.480 "uuid": "0c4b57c9-cd27-5201-9de6-fc30a7e0c73a", 00:26:48.480 "is_configured": true, 00:26:48.480 "data_offset": 256, 00:26:48.480 "data_size": 7936 00:26:48.480 }, 00:26:48.480 { 00:26:48.480 "name": "BaseBdev2", 00:26:48.480 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:48.480 "is_configured": true, 00:26:48.480 "data_offset": 256, 00:26:48.480 "data_size": 7936 00:26:48.480 } 00:26:48.480 ] 00:26:48.480 }' 00:26:48.480 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:48.480 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.480 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:48.480 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.480 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:48.739 [2024-08-13 06:21:50.357547] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:48.739 [2024-08-13 06:21:50.455961] 
bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:48.739 [2024-08-13 06:21:50.456013] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.739 [2024-08-13 06:21:50.456039] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:48.739 [2024-08-13 06:21:50.456047] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.739 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.999 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.999 "name": "raid_bdev1", 00:26:48.999 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:48.999 "strip_size_kb": 0, 00:26:48.999 "state": "online", 00:26:48.999 "raid_level": "raid1", 00:26:48.999 "superblock": true, 00:26:48.999 "num_base_bdevs": 2, 00:26:48.999 "num_base_bdevs_discovered": 1, 00:26:48.999 "num_base_bdevs_operational": 1, 00:26:48.999 "base_bdevs_list": [ 00:26:48.999 { 00:26:48.999 "name": null, 00:26:48.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.999 "is_configured": false, 00:26:48.999 "data_offset": 256, 00:26:48.999 "data_size": 7936 00:26:48.999 }, 00:26:48.999 { 00:26:48.999 "name": "BaseBdev2", 00:26:48.999 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:48.999 "is_configured": true, 00:26:48.999 "data_offset": 256, 00:26:48.999 "data_size": 7936 00:26:48.999 } 00:26:48.999 ] 00:26:48.999 }' 00:26:48.999 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.999 06:21:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:49.568 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:49.568 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
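[Annotation] The @116-@128 entries in this stretch show verify_raid_bdev_state() being set up after the spare is yanked mid-rebuild; the actual field comparisons are never traced because xtrace is disabled at @128, so the checks in this sketch are assumptions about what is validated against the raid_bdev_info JSON dumps (state, raid level, strip size, operational base bdev count):

    # Sketch only: the locals mirror the @116-@124 trace; the comparisons below
    # are assumed, since tracing is switched off (@128 xtrace_disable) before they run.
    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4
        local num_base_bdevs_operational=$5
        local raid_bdev_info num_base_bdevs num_base_bdevs_discovered tmp

        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]] || return 1
        [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == "$strip_size" ]] || return 1
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]]
    }

Here the expected values are online/raid1/0/1, i.e. the array must stay online in degraded mode with only BaseBdev2 configured after the spare was removed.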
00:26:49.568 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:49.568 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:49.568 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.568 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.568 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.828 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.828 "name": "raid_bdev1", 00:26:49.828 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:49.828 "strip_size_kb": 0, 00:26:49.828 "state": "online", 00:26:49.828 "raid_level": "raid1", 00:26:49.828 "superblock": true, 00:26:49.828 "num_base_bdevs": 2, 00:26:49.828 "num_base_bdevs_discovered": 1, 00:26:49.828 "num_base_bdevs_operational": 1, 00:26:49.828 "base_bdevs_list": [ 00:26:49.828 { 00:26:49.828 "name": null, 00:26:49.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.828 "is_configured": false, 00:26:49.828 "data_offset": 256, 00:26:49.828 "data_size": 7936 00:26:49.828 }, 00:26:49.828 { 00:26:49.828 "name": "BaseBdev2", 00:26:49.828 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:49.828 "is_configured": true, 00:26:49.828 "data_offset": 256, 00:26:49.828 "data_size": 7936 00:26:49.828 } 00:26:49.828 ] 00:26:49.828 }' 00:26:49.828 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.828 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:49.828 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.828 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:49.828 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:50.087 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:50.347 [2024-08-13 06:21:51.916424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:50.347 [2024-08-13 06:21:51.916477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.347 [2024-08-13 06:21:51.916497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:50.347 [2024-08-13 06:21:51.916506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.347 [2024-08-13 06:21:51.916662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.347 [2024-08-13 06:21:51.916675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:50.347 [2024-08-13 06:21:51.916717] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:50.347 [2024-08-13 06:21:51.916728] bdev_raid.c:3680:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:50.347 [2024-08-13 06:21:51.916737] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:50.347 BaseBdev1 00:26:50.347 06:21:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.287 06:21:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.547 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.547 "name": "raid_bdev1", 00:26:51.547 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:51.547 "strip_size_kb": 0, 00:26:51.547 "state": "online", 00:26:51.547 "raid_level": "raid1", 00:26:51.547 "superblock": true, 00:26:51.547 "num_base_bdevs": 2, 00:26:51.547 "num_base_bdevs_discovered": 1, 00:26:51.547 "num_base_bdevs_operational": 1, 00:26:51.547 "base_bdevs_list": [ 00:26:51.547 { 00:26:51.547 "name": null, 00:26:51.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.547 "is_configured": false, 00:26:51.547 "data_offset": 256, 00:26:51.547 "data_size": 7936 00:26:51.547 }, 00:26:51.547 { 00:26:51.547 "name": "BaseBdev2", 00:26:51.547 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:51.547 "is_configured": true, 00:26:51.547 "data_offset": 256, 00:26:51.547 "data_size": 7936 00:26:51.547 } 00:26:51.547 ] 00:26:51.547 }' 00:26:51.547 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.547 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.117 "name": "raid_bdev1", 00:26:52.117 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:52.117 "strip_size_kb": 0, 00:26:52.117 "state": "online", 00:26:52.117 "raid_level": "raid1", 00:26:52.117 "superblock": true, 00:26:52.117 "num_base_bdevs": 2, 00:26:52.117 "num_base_bdevs_discovered": 1, 00:26:52.117 "num_base_bdevs_operational": 1, 00:26:52.117 "base_bdevs_list": [ 00:26:52.117 { 00:26:52.117 "name": null, 00:26:52.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.117 "is_configured": false, 00:26:52.117 "data_offset": 256, 00:26:52.117 "data_size": 7936 00:26:52.117 }, 00:26:52.117 { 00:26:52.117 "name": "BaseBdev2", 00:26:52.117 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:52.117 "is_configured": true, 00:26:52.117 "data_offset": 256, 00:26:52.117 "data_size": 7936 00:26:52.117 } 00:26:52.117 ] 00:26:52.117 }' 00:26:52.117 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@646 -- # local es=0 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:26:52.377 06:21:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:52.377 06:21:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:52.377 [2024-08-13 06:21:54.144692] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:52.377 [2024-08-13 06:21:54.144804] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:52.377 [2024-08-13 06:21:54.144820] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:52.377 request: 00:26:52.377 { 00:26:52.377 "base_bdev": "BaseBdev1", 00:26:52.377 "raid_bdev": "raid_bdev1", 00:26:52.377 "method": "bdev_raid_add_base_bdev", 00:26:52.377 "req_id": 1 00:26:52.377 } 00:26:52.377 Got JSON-RPC error response 00:26:52.377 response: 00:26:52.377 { 00:26:52.377 "code": -22, 00:26:52.377 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:52.377 } 00:26:52.637 06:21:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@649 -- # es=1 00:26:52.637 06:21:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:26:52.637 06:21:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:26:52.637 06:21:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:26:52.637 06:21:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.575 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.835 
06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:53.835 "name": "raid_bdev1", 00:26:53.835 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:53.835 "strip_size_kb": 0, 00:26:53.835 "state": "online", 00:26:53.835 "raid_level": "raid1", 00:26:53.835 "superblock": true, 00:26:53.835 "num_base_bdevs": 2, 00:26:53.835 "num_base_bdevs_discovered": 1, 00:26:53.835 "num_base_bdevs_operational": 1, 00:26:53.835 "base_bdevs_list": [ 00:26:53.835 { 00:26:53.835 "name": null, 00:26:53.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.835 "is_configured": false, 00:26:53.835 "data_offset": 256, 00:26:53.835 "data_size": 7936 00:26:53.835 }, 00:26:53.835 { 00:26:53.835 "name": "BaseBdev2", 00:26:53.835 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:53.835 "is_configured": true, 00:26:53.835 "data_offset": 256, 00:26:53.835 "data_size": 7936 00:26:53.835 } 00:26:53.835 ] 00:26:53.835 }' 00:26:53.835 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:53.835 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:54.404 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:54.404 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.404 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:54.404 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:54.404 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.404 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.404 06:21:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.404 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.404 "name": "raid_bdev1", 00:26:54.404 "uuid": "bc37e0d3-1b3c-49a2-a802-963e850e3df8", 00:26:54.404 "strip_size_kb": 0, 00:26:54.404 "state": "online", 00:26:54.404 "raid_level": "raid1", 00:26:54.404 "superblock": true, 00:26:54.404 "num_base_bdevs": 2, 00:26:54.404 "num_base_bdevs_discovered": 1, 00:26:54.404 "num_base_bdevs_operational": 1, 00:26:54.404 "base_bdevs_list": [ 00:26:54.404 { 00:26:54.404 "name": null, 00:26:54.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.404 "is_configured": false, 00:26:54.404 "data_offset": 256, 00:26:54.404 "data_size": 7936 00:26:54.404 }, 00:26:54.404 { 00:26:54.404 "name": "BaseBdev2", 00:26:54.404 "uuid": "208240ae-a966-5cdd-81f5-476bf55e8e1b", 00:26:54.404 "is_configured": true, 00:26:54.404 "data_offset": 256, 00:26:54.404 "data_size": 7936 00:26:54.404 } 00:26:54.404 ] 00:26:54.404 }' 00:26:54.404 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.404 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:54.404 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.664 06:21:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:54.664 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 109234 00:26:54.664 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 109234 ']' 00:26:54.664 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 109234 00:26:54.664 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:26:54.664 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:54.664 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109234 00:26:54.664 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:54.664 killing process with pid 109234 00:26:54.664 Received shutdown signal, test time was about 60.000000 seconds 00:26:54.664 00:26:54.665 Latency(us) 00:26:54.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.665 =================================================================================================================== 00:26:54.665 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:54.665 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:54.665 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109234' 00:26:54.665 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 109234 00:26:54.665 [2024-08-13 06:21:56.277390] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:54.665 [2024-08-13 06:21:56.277483] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.665 [2024-08-13 06:21:56.277522] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.665 [2024-08-13 06:21:56.277531] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:26:54.665 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 109234 00:26:54.665 [2024-08-13 06:21:56.309309] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:54.925 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:26:54.925 00:26:54.925 real 0m26.321s 00:26:54.925 user 0m41.309s 00:26:54.925 sys 0m3.173s 00:26:54.925 ************************************ 00:26:54.925 END TEST raid_rebuild_test_sb_md_interleaved 00:26:54.925 ************************************ 00:26:54.925 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:54.925 06:21:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:54.925 06:21:56 bdev_raid -- bdev/bdev_raid.sh@994 -- # trap - EXIT 00:26:54.925 06:21:56 bdev_raid -- bdev/bdev_raid.sh@995 -- # cleanup 00:26:54.925 06:21:56 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 109234 ']' 00:26:54.925 06:21:56 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 109234 00:26:54.925 06:21:56 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:26:54.925 
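[Annotation] The teardown above runs killprocess 109234 from common/autotest_common.sh. A rough sketch of that flow, as suggested by the @946-@970 entries (error handling and the "sudo" branch, which is not exercised here, are omitted; the real helper differs in detail):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # @946: refuse an empty pid
        kill -0 "$pid"                       # @950: fails if the process is already gone
        local process_name
        if [[ $(uname) == Linux ]]; then     # @951
            process_name=$(ps --no-headers -o comm= "$pid")   # @952: "reactor_0" here
        fi
        # @956 checks whether the name is "sudo"; that branch is not taken in this run.
        echo "killing process with pid $pid" # @964
        kill "$pid"                          # @965: SPDK app prints the shutdown latency table
        wait "$pid"                          # @970: reap it so the END TEST timing is accurate
    }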
************************************ 00:26:54.925 END TEST bdev_raid 00:26:54.925 ************************************ 00:26:54.925 00:26:54.925 real 20m37.675s 00:26:54.925 user 34m52.772s 00:26:54.925 sys 3m19.141s 00:26:54.925 06:21:56 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:54.925 06:21:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:54.925 06:21:56 -- spdk/autotest.sh@203 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:54.925 06:21:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:54.925 06:21:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:54.925 06:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:54.925 ************************************ 00:26:54.925 START TEST spdkcli_raid 00:26:54.925 ************************************ 00:26:54.925 06:21:56 spdkcli_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:55.185 * Looking for test storage... 00:26:55.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:26:55.185 06:21:56 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 
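[Annotation] Condensed, the spdkcli_raid steps traced below drive spdkcli_job.py and the match tool roughly as follows (quoting simplified from the escaped form in the trace; this is a sketch of the flow, not the literal raid.sh text):

    spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py

    # Create two malloc base bdevs, then a raid level 0 volume "testraid" on top of them.
    $spdkcli_job "'/bdevs/malloc create 8 512 Malloc1' 'Malloc1' True" \
                 "'/bdevs/malloc create 8 512 Malloc2' 'Malloc2' True"
    $spdkcli_job "'/bdevs/raid_volume create testraid 0 \"Malloc1 Malloc2\" 4' 'testraid' True"

    # check_match (spdkcli/common.sh@44-@46): dump `spdkcli.py ll /bdevs` and diff it
    # against test/spdkcli/match_files/spdkcli_raid.test.match.
    check_match

    # Tear everything down again.
    $spdkcli_job "'/bdevs/raid_volume delete testraid' '' True"
    $spdkcli_job "'/bdevs/malloc delete Malloc1' '' True" \
                 "'/bdevs/malloc delete Malloc2' '' True"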
00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@15 -- # . /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:26:55.185 06:21:56 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:55.185 06:21:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=110014 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:26:55.185 06:21:56 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 110014 00:26:55.185 06:21:56 spdkcli_raid -- common/autotest_common.sh@827 -- # '[' -z 110014 ']' 00:26:55.185 06:21:56 spdkcli_raid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.185 06:21:56 spdkcli_raid -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:55.186 06:21:56 spdkcli_raid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.186 06:21:56 spdkcli_raid -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:55.186 06:21:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:55.186 [2024-08-13 06:21:56.963662] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
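[Annotation] Per the spdkcli/common.sh@26-@28 entries above, run_spdk_tgt boils down to launching the target in the background and waiting for its RPC socket. The use of $! is an assumption, since only the resulting pid (110014) is visible in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
    spdk_tgt_pid=$!                  # 110014 in this run
    waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock accepts RPCs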
00:26:55.186 [2024-08-13 06:21:56.963906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110014 ] 00:26:55.445 [2024-08-13 06:21:57.111066] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:55.445 [2024-08-13 06:21:57.158776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.446 [2024-08-13 06:21:57.158881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.015 06:21:57 spdkcli_raid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:56.015 06:21:57 spdkcli_raid -- common/autotest_common.sh@860 -- # return 0 00:26:56.015 06:21:57 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:26:56.015 06:21:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.015 06:21:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:56.275 06:21:57 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:26:56.275 06:21:57 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:56.275 06:21:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:56.275 06:21:57 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:56.275 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:56.275 ' 00:26:57.654 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:26:57.655 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:26:57.913 06:21:59 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:26:57.913 06:21:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.913 06:21:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:57.913 06:21:59 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:26:57.913 06:21:59 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:57.914 06:21:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:57.914 06:21:59 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:26:57.914 ' 00:26:58.852 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:26:58.852 06:22:00 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:26:58.852 06:22:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.852 06:22:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:59.111 06:22:00 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:26:59.111 06:22:00 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:59.111 06:22:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:59.111 06:22:00 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:26:59.111 06:22:00 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:26:59.681 06:22:01 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:26:59.681 
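The spdkcli verbs exercised above ('/bdevs/malloc create', '/bdevs/raid_volume create') are thin wrappers over the same JSON-RPC methods that scripts/rpc.py exposes, so the configuration under test can also be reproduced against the running spdk_tgt directly. A rough equivalent of the create sequence is sketched here for reference; the method names are standard, but the option spellings (-r for RAID level, -z for strip size in KiB, -b for the base-bdev list) are quoted from memory and worth confirming with `scripts/rpc.py bdev_raid_create -h` on the revision under test:

# two 8 MiB malloc bdevs with 512-byte blocks ('/bdevs/malloc create 8 512 MallocN')
scripts/rpc.py bdev_malloc_create -b Malloc1 8 512
scripts/rpc.py bdev_malloc_create -b Malloc2 8 512

# raid0 over both base bdevs with a 4 KiB strip ('/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4')
scripts/rpc.py bdev_raid_create -n testraid -r 0 -z 4 -b "Malloc1 Malloc2"

# confirm the volume came up
scripts/rpc.py bdev_get_bdevs -b testraid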
06:22:01 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:26:59.681 06:22:01 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:26:59.681 06:22:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.681 06:22:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:59.681 06:22:01 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:26:59.681 06:22:01 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:59.681 06:22:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:59.681 06:22:01 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:26:59.681 ' 00:27:00.618 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:27:00.618 06:22:02 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:27:00.618 06:22:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.618 06:22:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:00.618 06:22:02 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:27:00.618 06:22:02 spdkcli_raid -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:00.618 06:22:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:00.618 06:22:02 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:27:00.618 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:27:00.618 ' 00:27:01.999 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:27:01.999 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:27:02.260 06:22:03 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:02.260 06:22:03 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 110014 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@946 -- # '[' -z 110014 ']' 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@950 -- # kill -0 110014 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@951 -- # uname 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110014 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110014' 00:27:02.260 killing process with pid 110014 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@965 -- # kill 110014 00:27:02.260 06:22:03 spdkcli_raid -- common/autotest_common.sh@970 -- # wait 110014 00:27:02.518 06:22:04 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:27:02.518 06:22:04 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 110014 ']' 00:27:02.518 06:22:04 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 110014 00:27:02.518 06:22:04 spdkcli_raid -- common/autotest_common.sh@946 -- # '[' -z 110014 ']' 00:27:02.518 06:22:04 spdkcli_raid -- 
common/autotest_common.sh@950 -- # kill -0 110014 00:27:02.518 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (110014) - No such process 00:27:02.778 06:22:04 spdkcli_raid -- common/autotest_common.sh@973 -- # echo 'Process with pid 110014 is not found' 00:27:02.778 Process with pid 110014 is not found 00:27:02.778 06:22:04 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:27:02.778 06:22:04 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:02.778 06:22:04 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:02.778 06:22:04 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:02.778 00:27:02.778 real 0m7.616s 00:27:02.778 user 0m16.210s 00:27:02.778 sys 0m1.084s 00:27:02.778 06:22:04 spdkcli_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:02.778 06:22:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:02.778 ************************************ 00:27:02.778 END TEST spdkcli_raid 00:27:02.778 ************************************ 00:27:02.778 06:22:04 -- spdk/autotest.sh@204 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:27:02.778 06:22:04 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:02.778 06:22:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:02.778 06:22:04 -- common/autotest_common.sh@10 -- # set +x 00:27:02.778 ************************************ 00:27:02.778 START TEST blockdev_raid5f 00:27:02.778 ************************************ 00:27:02.778 06:22:04 blockdev_raid5f -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:27:02.778 * Looking for test storage... 
00:27:02.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:27:02.778 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:27:02.779 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:27:02.779 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:27:02.779 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:27:02.779 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=110260 00:27:02.779 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:02.779 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:02.779 06:22:04 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 110260 00:27:02.779 06:22:04 blockdev_raid5f -- common/autotest_common.sh@827 -- # '[' -z 110260 ']' 00:27:02.779 06:22:04 blockdev_raid5f -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.779 06:22:04 blockdev_raid5f -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:02.779 06:22:04 blockdev_raid5f -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.779 06:22:04 blockdev_raid5f -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:02.779 06:22:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.038 [2024-08-13 06:22:04.644066] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:27:03.038 [2024-08-13 06:22:04.644316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110260 ] 00:27:03.038 [2024-08-13 06:22:04.790858] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.298 [2024-08-13 06:22:04.839694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.885 06:22:05 blockdev_raid5f -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:03.885 06:22:05 blockdev_raid5f -- common/autotest_common.sh@860 -- # return 0 00:27:03.885 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:27:03.885 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:27:03.885 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 Malloc0 00:27:03.886 Malloc1 00:27:03.886 Malloc2 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@557 -- # xtrace_disable 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 06:22:05 blockdev_raid5f -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:27:03.886 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:27:03.886 06:22:05 blockdev_raid5f -- 
bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aacc223d-c089-4156-a5f7-6bc04a24c400"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aacc223d-c089-4156-a5f7-6bc04a24c400",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aacc223d-c089-4156-a5f7-6bc04a24c400",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "818e4b97-9702-4f09-b106-833f87962395",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4f0a7c90-20aa-4e47-87a7-593d3979808c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3072662d-5447-460c-8d27-20bddc695d36",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:27:04.149 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:27:04.149 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:27:04.149 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:27:04.149 06:22:05 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 110260 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@946 -- # '[' -z 110260 ']' 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@950 -- # kill -0 110260 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@951 -- # uname 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110260 00:27:04.149 killing process with pid 110260 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110260' 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@965 -- # kill 110260 00:27:04.149 06:22:05 blockdev_raid5f -- common/autotest_common.sh@970 -- # wait 110260 00:27:04.409 06:22:06 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:04.409 06:22:06 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:27:04.409 06:22:06 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:27:04.409 06:22:06 blockdev_raid5f -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:27:04.409 06:22:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:04.409 ************************************ 00:27:04.409 START TEST bdev_hello_world 00:27:04.409 ************************************ 00:27:04.409 06:22:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:27:04.668 [2024-08-13 06:22:06.256798] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:27:04.668 [2024-08-13 06:22:06.256945] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110294 ] 00:27:04.668 [2024-08-13 06:22:06.405851] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.668 [2024-08-13 06:22:06.450922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.928 [2024-08-13 06:22:06.642625] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:04.928 [2024-08-13 06:22:06.642807] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:27:04.928 [2024-08-13 06:22:06.642838] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:04.928 [2024-08-13 06:22:06.643364] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:04.928 [2024-08-13 06:22:06.643507] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:04.928 [2024-08-13 06:22:06.643529] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:04.928 [2024-08-13 06:22:06.643587] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:27:04.928 00:27:04.928 [2024-08-13 06:22:06.643619] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:05.187 00:27:05.187 real 0m0.724s 00:27:05.187 user 0m0.382s 00:27:05.187 sys 0m0.225s 00:27:05.187 ************************************ 00:27:05.187 END TEST bdev_hello_world 00:27:05.187 ************************************ 00:27:05.187 06:22:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:05.187 06:22:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:05.187 06:22:06 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:27:05.187 06:22:06 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:05.187 06:22:06 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:05.187 06:22:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:05.188 ************************************ 00:27:05.188 START TEST bdev_bounds 00:27:05.188 ************************************ 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=110325 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:05.188 Process bdevio pid: 110325 00:27:05.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 110325' 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 110325 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 110325 ']' 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:05.188 06:22:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:05.447 [2024-08-13 06:22:07.055374] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:27:05.447 [2024-08-13 06:22:07.055527] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110325 ] 00:27:05.447 [2024-08-13 06:22:07.204533] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:05.707 [2024-08-13 06:22:07.253724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.707 [2024-08-13 06:22:07.253872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.707 [2024-08-13 06:22:07.253951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.275 06:22:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:06.275 06:22:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:27:06.275 06:22:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:06.275 I/O targets: 00:27:06.275 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:27:06.275 00:27:06.275 00:27:06.275 CUnit - A unit testing framework for C - Version 2.1-3 00:27:06.275 http://cunit.sourceforge.net/ 00:27:06.275 00:27:06.275 00:27:06.275 Suite: bdevio tests on: raid5f 00:27:06.275 Test: blockdev write read block ...passed 00:27:06.275 Test: blockdev write zeroes read block ...passed 00:27:06.275 Test: blockdev write zeroes read no split ...passed 00:27:06.275 Test: blockdev write zeroes read split ...passed 00:27:06.535 Test: blockdev write zeroes read split partial ...passed 00:27:06.535 Test: blockdev reset ...passed 00:27:06.535 Test: blockdev write read 8 blocks ...passed 00:27:06.535 Test: blockdev write read size > 128k ...passed 00:27:06.535 Test: blockdev write read invalid size ...passed 00:27:06.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:06.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:06.535 Test: blockdev write read max offset ...passed 00:27:06.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:06.535 Test: blockdev writev readv 8 blocks ...passed 00:27:06.535 Test: blockdev writev readv 30 x 1block ...passed 00:27:06.535 Test: blockdev writev readv block ...passed 00:27:06.535 Test: blockdev writev readv size > 128k ...passed 00:27:06.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:06.535 Test: blockdev comparev and writev ...passed 00:27:06.535 Test: blockdev nvme passthru rw ...passed 00:27:06.535 Test: blockdev nvme passthru vendor specific ...passed 00:27:06.535 Test: blockdev nvme admin passthru ...passed 00:27:06.535 Test: blockdev copy ...passed 00:27:06.535 00:27:06.535 Run Summary: Type Total Ran Passed Failed Inactive 00:27:06.535 suites 1 1 n/a 0 0 00:27:06.535 tests 23 23 23 0 0 00:27:06.535 asserts 130 130 130 0 n/a 00:27:06.535 00:27:06.535 Elapsed time = 0.331 seconds 00:27:06.535 0 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 110325 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 110325 ']' 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 110325 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110325 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:06.535 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110325' 00:27:06.535 killing process with pid 110325 00:27:06.536 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@965 -- # kill 110325 00:27:06.536 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # wait 110325 00:27:06.796 06:22:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:27:06.796 00:27:06.796 real 0m1.468s 00:27:06.796 user 0m3.476s 00:27:06.796 sys 0m0.362s 00:27:06.796 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:06.796 06:22:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:06.796 ************************************ 00:27:06.796 END TEST bdev_bounds 00:27:06.796 ************************************ 00:27:06.796 06:22:08 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:27:06.796 06:22:08 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:27:06.796 06:22:08 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:06.796 06:22:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:06.796 ************************************ 00:27:06.796 START TEST bdev_nbd 00:27:06.796 ************************************ 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=110369 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 110369 /var/tmp/spdk-nbd.sock 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 110369 ']' 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:06.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:06.796 06:22:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:07.056 [2024-08-13 06:22:08.617661] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:27:07.056 [2024-08-13 06:22:08.617822] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.056 [2024-08-13 06:22:08.769760] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.056 [2024-08-13 06:22:08.816406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:07.994 1+0 records in 00:27:07.994 1+0 records out 00:27:07.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554418 s, 7.4 MB/s 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:07.994 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:08.254 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:08.254 { 00:27:08.254 "nbd_device": "/dev/nbd0", 00:27:08.254 "bdev_name": "raid5f" 00:27:08.254 } 00:27:08.254 ]' 00:27:08.254 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:08.254 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:08.254 { 00:27:08.254 "nbd_device": "/dev/nbd0", 00:27:08.254 "bdev_name": "raid5f" 00:27:08.254 } 00:27:08.254 ]' 00:27:08.254 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:08.255 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:08.255 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:08.255 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:08.255 
06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:08.255 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:08.255 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:08.255 06:22:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:08.514 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
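Behind the nbd_common.sh plumbing traced here and in the data-verify pass that follows below, the bdev_nbd test reduces to exporting the raid5f bdev as a kernel block device, pushing data through it with dd, and comparing what comes back. The essential commands, all of which appear verbatim in this trace (the dedicated RPC socket /var/tmp/spdk-nbd.sock and the vagrant paths are specific to this run), are roughly:

# export the bdev as /dev/nbd0 through the bdev_svc app listening on the NBD socket
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0

# push 1 MiB of random data through the NBD device with O_DIRECT and verify the round trip
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0

# tear the export back down
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0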
00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:08.774 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:27:09.034 /dev/nbd0 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:09.034 1+0 records in 00:27:09.034 1+0 records out 00:27:09.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054812 s, 7.5 MB/s 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:09.034 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:09.294 { 00:27:09.294 "nbd_device": "/dev/nbd0", 00:27:09.294 "bdev_name": "raid5f" 00:27:09.294 } 00:27:09.294 ]' 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:27:09.294 { 00:27:09.294 "nbd_device": "/dev/nbd0", 00:27:09.294 "bdev_name": "raid5f" 00:27:09.294 } 00:27:09.294 ]' 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:09.294 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:09.295 256+0 records in 00:27:09.295 256+0 records out 00:27:09.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138725 s, 75.6 MB/s 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:09.295 256+0 records in 00:27:09.295 256+0 records out 00:27:09.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285043 s, 36.8 MB/s 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:09.295 06:22:10 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:09.295 06:22:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:09.555 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:09.814 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:10.074 malloc_lvol_verify 00:27:10.074 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:10.074 c52ff611-e863-4773-ae0d-d9aa80dad7bd 00:27:10.074 06:22:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:10.334 7b74daca-060f-4f01-84c1-7f168e7f4409 00:27:10.334 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:10.594 /dev/nbd0 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:10.594 mke2fs 1.47.0 (5-Feb-2023) 00:27:10.594 Discarding device blocks: 0/4096 done 00:27:10.594 Creating filesystem with 4096 1k blocks and 1024 inodes 00:27:10.594 00:27:10.594 Allocating group tables: 0/1 done 00:27:10.594 Writing inode tables: 0/1 done 00:27:10.594 Creating journal (1024 blocks): done 00:27:10.594 Writing superblocks and filesystem accounting information: 0/1 done 00:27:10.594 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:10.594 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 110369 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 110369 ']' 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 110369 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # 
'[' Linux = Linux ']' 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110369 00:27:10.854 killing process with pid 110369 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110369' 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@965 -- # kill 110369 00:27:10.854 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # wait 110369 00:27:11.115 06:22:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:27:11.115 00:27:11.115 real 0m4.304s 00:27:11.115 user 0m6.215s 00:27:11.115 sys 0m1.261s 00:27:11.115 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:11.115 ************************************ 00:27:11.115 END TEST bdev_nbd 00:27:11.115 ************************************ 00:27:11.115 06:22:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:11.115 06:22:12 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:27:11.115 06:22:12 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:27:11.115 06:22:12 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:27:11.115 06:22:12 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:27:11.115 06:22:12 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:11.115 06:22:12 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:11.115 06:22:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:11.115 ************************************ 00:27:11.115 START TEST bdev_fio 00:27:11.115 ************************************ 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:27:11.115 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:27:11.115 
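The fio stage being configured here drives I/O at the raid5f bdev through SPDK's fio plugin rather than a kernel block device: fio_config_gen writes a verify workload into bdev.fio, the test appends a [job_raid5f] section with filename=raid5f, and the plugin is injected via LD_PRELOAD so that ioengine=spdk_bdev resolves inside fio. Stripped down from the full command line that appears further below (paths are those of this runner; the extra libasan.so.8 preload is an artifact of this ASAN-instrumented build and would normally be absent):

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --iodepth=8 --bs=4k --runtime=10 --verify_state_save=0 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio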
06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:27:11.115 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:27:11.375 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:11.375 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:27:11.375 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:27:11.375 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:27:11.375 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:27:11.375 06:22:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:27:11.375 06:22:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:11.375 06:22:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:27:11.375 06:22:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:11.375 06:22:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:27:11.375 06:22:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:11.376 ************************************ 00:27:11.376 START TEST bdev_fio_rw_verify 00:27:11.376 ************************************ 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:11.376 06:22:13 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:11.376 06:22:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:11.636 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:11.636 fio-3.35 00:27:11.636 Starting 1 thread 00:27:23.856 00:27:23.856 job_raid5f: (groupid=0, jobs=1): err= 0: pid=110560: Tue Aug 13 06:22:23 2024 00:27:23.856 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec) 00:27:23.856 slat (nsec): min=16918, max=63707, avg=18976.70, stdev=2054.13 00:27:23.856 clat (usec): min=11, max=317, avg=130.58, stdev=44.89 00:27:23.856 lat (usec): min=30, max=340, avg=149.55, stdev=45.16 00:27:23.856 clat percentiles (usec): 00:27:23.856 | 50.000th=[ 133], 99.000th=[ 215], 99.900th=[ 235], 99.990th=[ 269], 00:27:23.856 | 99.999th=[ 310] 00:27:23.856 write: IOPS=12.9k, BW=50.4MiB/s (52.8MB/s)(497MiB/9875msec); 0 zone resets 00:27:23.856 slat (usec): min=7, max=344, avg=16.36, stdev= 3.97 00:27:23.856 clat (usec): min=59, max=1841, avg=299.55, stdev=43.50 00:27:23.856 lat (usec): min=75, max=2100, avg=315.90, stdev=44.68 00:27:23.856 clat percentiles (usec): 00:27:23.856 | 50.000th=[ 302], 99.000th=[ 375], 99.900th=[ 635], 99.990th=[ 1418], 00:27:23.856 | 99.999th=[ 1745] 00:27:23.856 bw ( KiB/s): min=48848, max=54768, per=98.98%, avg=51057.68, stdev=1396.77, samples=19 00:27:23.856 iops : min=12212, max=13692, avg=12764.42, stdev=349.19, samples=19 00:27:23.856 lat (usec) : 20=0.01%, 50=0.01%, 100=14.73%, 250=40.06%, 
500=45.12% 00:27:23.856 lat (usec) : 750=0.05%, 1000=0.02% 00:27:23.856 lat (msec) : 2=0.01% 00:27:23.856 cpu : usr=98.86%, sys=0.48%, ctx=41, majf=0, minf=13168 00:27:23.856 IO depths : 1=7.6%, 2=19.7%, 4=55.3%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.856 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.856 issued rwts: total=123403,127342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.856 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:23.856 00:27:23.856 Run status group 0 (all jobs): 00:27:23.856 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:27:23.856 WRITE: bw=50.4MiB/s (52.8MB/s), 50.4MiB/s-50.4MiB/s (52.8MB/s-52.8MB/s), io=497MiB (522MB), run=9875-9875msec 00:27:23.856 ----------------------------------------------------- 00:27:23.856 Suppressions used: 00:27:23.856 count bytes template 00:27:23.856 1 7 /usr/src/fio/parse.c 00:27:23.856 122 11712 /usr/src/fio/iolog.c 00:27:23.856 1 8 libtcmalloc_minimal.so 00:27:23.856 1 904 libcrypto.so 00:27:23.856 ----------------------------------------------------- 00:27:23.856 00:27:23.856 00:27:23.856 real 0m11.232s 00:27:23.856 user 0m11.593s 00:27:23.856 sys 0m0.678s 00:27:23.856 ************************************ 00:27:23.856 END TEST bdev_fio_rw_verify 00:27:23.856 ************************************ 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 
00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aacc223d-c089-4156-a5f7-6bc04a24c400"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aacc223d-c089-4156-a5f7-6bc04a24c400",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aacc223d-c089-4156-a5f7-6bc04a24c400",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "818e4b97-9702-4f09-b106-833f87962395",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4f0a7c90-20aa-4e47-87a7-593d3979808c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3072662d-5447-460c-8d27-20bddc695d36",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:23.856 /home/vagrant/spdk_repo/spdk 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:27:23.856 00:27:23.856 real 0m11.554s 00:27:23.856 user 0m11.728s 00:27:23.856 sys 0m0.820s 00:27:23.856 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:23.857 ************************************ 00:27:23.857 END TEST bdev_fio 00:27:23.857 ************************************ 00:27:23.857 06:22:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:23.857 06:22:24 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:23.857 06:22:24 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:23.857 06:22:24 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:27:23.857 06:22:24 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:23.857 06:22:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:23.857 ************************************ 00:27:23.857 START TEST bdev_verify 00:27:23.857 
************************************ 00:27:23.857 06:22:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:23.857 [2024-08-13 06:22:24.618289] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:27:23.857 [2024-08-13 06:22:24.618525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110704 ] 00:27:23.857 [2024-08-13 06:22:24.766815] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:23.857 [2024-08-13 06:22:24.812245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.857 [2024-08-13 06:22:24.812369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.857 Running I/O for 5 seconds... 00:27:29.140 00:27:29.140 Latency(us) 00:27:29.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.140 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:29.140 Verification LBA range: start 0x0 length 0x2000 00:27:29.140 raid5f : 5.03 5620.61 21.96 0.00 0.00 34188.59 255.78 35944.64 00:27:29.140 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:29.140 Verification LBA range: start 0x2000 length 0x2000 00:27:29.140 raid5f : 5.02 7132.32 27.86 0.00 0.00 26931.25 116.26 41210.41 00:27:29.140 =================================================================================================================== 00:27:29.140 Total : 12752.93 49.82 0.00 0.00 30131.71 116.26 41210.41 00:27:29.140 ************************************ 00:27:29.140 END TEST bdev_verify 00:27:29.140 00:27:29.140 real 0m5.756s 00:27:29.140 user 0m10.695s 00:27:29.140 sys 0m0.229s 00:27:29.140 06:22:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:29.140 06:22:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:29.140 ************************************ 00:27:29.140 06:22:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:29.140 06:22:30 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:27:29.140 06:22:30 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:29.140 06:22:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:29.140 ************************************ 00:27:29.140 START TEST bdev_verify_big_io 00:27:29.140 ************************************ 00:27:29.140 06:22:30 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:29.140 [2024-08-13 06:22:30.446623] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 
00:27:29.140 [2024-08-13 06:22:30.446779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110786 ] 00:27:29.140 [2024-08-13 06:22:30.593056] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:29.140 [2024-08-13 06:22:30.648327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.140 [2024-08-13 06:22:30.648458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.140 Running I/O for 5 seconds... 00:27:34.419 00:27:34.419 Latency(us) 00:27:34.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.419 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:34.419 Verification LBA range: start 0x0 length 0x200 00:27:34.419 raid5f : 5.28 361.02 22.56 0.00 0.00 8773830.82 224.48 369977.91 00:27:34.419 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:34.419 Verification LBA range: start 0x200 length 0x200 00:27:34.419 raid5f : 5.21 463.41 28.96 0.00 0.00 6910078.56 253.99 304041.25 00:27:34.419 =================================================================================================================== 00:27:34.419 Total : 824.44 51.53 0.00 0.00 7732322.20 224.48 369977.91 00:27:34.679 00:27:34.679 real 0m6.016s 00:27:34.679 user 0m11.192s 00:27:34.679 sys 0m0.247s 00:27:34.679 ************************************ 00:27:34.679 END TEST bdev_verify_big_io 00:27:34.679 ************************************ 00:27:34.679 06:22:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:34.679 06:22:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:34.679 06:22:36 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:34.679 06:22:36 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:27:34.679 06:22:36 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:34.679 06:22:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:34.679 ************************************ 00:27:34.679 START TEST bdev_write_zeroes 00:27:34.679 ************************************ 00:27:34.680 06:22:36 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:34.940 [2024-08-13 06:22:36.536333] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:27:34.940 [2024-08-13 06:22:36.536487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110868 ] 00:27:34.940 [2024-08-13 06:22:36.683355] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.199 [2024-08-13 06:22:36.736337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.199 Running I/O for 1 seconds... 
00:27:36.180 00:27:36.180 Latency(us) 00:27:36.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.180 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:36.180 raid5f : 1.00 29588.62 115.58 0.00 0.00 4311.57 1552.54 6238.80 00:27:36.180 =================================================================================================================== 00:27:36.180 Total : 29588.62 115.58 0.00 0.00 4311.57 1552.54 6238.80 00:27:36.439 00:27:36.439 real 0m1.734s 00:27:36.439 user 0m1.383s 00:27:36.439 sys 0m0.228s 00:27:36.439 06:22:38 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.439 ************************************ 00:27:36.439 END TEST bdev_write_zeroes 00:27:36.439 ************************************ 00:27:36.439 06:22:38 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:36.699 06:22:38 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:36.699 06:22:38 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:27:36.699 06:22:38 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.699 06:22:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:36.699 ************************************ 00:27:36.699 START TEST bdev_json_nonenclosed 00:27:36.699 ************************************ 00:27:36.699 06:22:38 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:36.699 [2024-08-13 06:22:38.347951] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:27:36.699 [2024-08-13 06:22:38.348229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110904 ] 00:27:36.959 [2024-08-13 06:22:38.495890] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.959 [2024-08-13 06:22:38.552206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.959 [2024-08-13 06:22:38.552317] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:27:36.959 [2024-08-13 06:22:38.552351] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:36.959 [2024-08-13 06:22:38.552362] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:36.959 00:27:36.959 real 0m0.411s 00:27:36.959 user 0m0.179s 00:27:36.959 sys 0m0.127s 00:27:36.959 06:22:38 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.959 ************************************ 00:27:36.959 END TEST bdev_json_nonenclosed 00:27:36.959 ************************************ 00:27:36.959 06:22:38 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:36.959 06:22:38 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:36.959 06:22:38 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:27:36.959 06:22:38 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.959 06:22:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:36.959 ************************************ 00:27:36.959 START TEST bdev_json_nonarray 00:27:36.959 ************************************ 00:27:36.959 06:22:38 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:37.219 [2024-08-13 06:22:38.828115] Starting SPDK v24.09-pre git sha1 7c739692e / DPDK 22.11.4 initialization... 00:27:37.219 [2024-08-13 06:22:38.828245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110930 ] 00:27:37.219 [2024-08-13 06:22:38.975667] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.479 [2024-08-13 06:22:39.030143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.479 [2024-08-13 06:22:39.030249] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:27:37.479 [2024-08-13 06:22:39.030278] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:37.479 [2024-08-13 06:22:39.030288] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:37.479 00:27:37.479 real 0m0.406s 00:27:37.479 user 0m0.172s 00:27:37.479 sys 0m0.129s 00:27:37.480 06:22:39 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:37.480 06:22:39 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:37.480 ************************************ 00:27:37.480 END TEST bdev_json_nonarray 00:27:37.480 ************************************ 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:27:37.480 06:22:39 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:27:37.480 ************************************ 00:27:37.480 END TEST blockdev_raid5f 00:27:37.480 ************************************ 00:27:37.480 00:27:37.480 real 0m34.828s 00:27:37.480 user 0m47.324s 00:27:37.480 sys 0m4.688s 00:27:37.480 06:22:39 blockdev_raid5f -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:37.480 06:22:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:37.740 06:22:39 -- spdk/autotest.sh@207 -- # uname -s 00:27:37.740 06:22:39 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:27:37.740 06:22:39 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:27:37.740 06:22:39 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:27:37.740 06:22:39 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@269 -- # timing_exit lib 00:27:37.740 06:22:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.740 06:22:39 -- common/autotest_common.sh@10 -- # set +x 00:27:37.740 06:22:39 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@285 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@323 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@332 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- 
spdk/autotest.sh@363 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@367 -- # '[' 0 -eq 1 ']' 00:27:37.740 06:22:39 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:37.740 06:22:39 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:37.740 06:22:39 -- spdk/autotest.sh@382 -- # [[ 0 -eq 1 ]] 00:27:37.740 06:22:39 -- spdk/autotest.sh@386 -- # [[ '' -eq 1 ]] 00:27:37.740 06:22:39 -- spdk/autotest.sh@391 -- # trap - SIGINT SIGTERM EXIT 00:27:37.740 06:22:39 -- spdk/autotest.sh@393 -- # timing_enter post_cleanup 00:27:37.740 06:22:39 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:37.740 06:22:39 -- common/autotest_common.sh@10 -- # set +x 00:27:37.740 06:22:39 -- spdk/autotest.sh@394 -- # autotest_cleanup 00:27:37.740 06:22:39 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:27:37.740 06:22:39 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:27:37.740 06:22:39 -- common/autotest_common.sh@10 -- # set +x 00:27:39.649 INFO: APP EXITING 00:27:39.649 INFO: killing all VMs 00:27:39.909 INFO: killing vhost app 00:27:39.909 INFO: EXIT DONE 00:27:40.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:40.427 Waiting for block devices as requested 00:27:40.427 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:40.427 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:41.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:41.366 Cleaning 00:27:41.366 Removing: /var/run/dpdk/spdk0/config 00:27:41.366 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:41.366 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:41.366 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:41.366 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:41.366 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:41.366 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:41.366 Removing: /dev/shm/spdk_tgt_trace.pid68039 00:27:41.366 Removing: /var/run/dpdk/spdk0 00:27:41.366 Removing: /var/run/dpdk/spdk_pid100163 00:27:41.366 Removing: /var/run/dpdk/spdk_pid103000 00:27:41.366 Removing: /var/run/dpdk/spdk_pid103775 00:27:41.366 Removing: /var/run/dpdk/spdk_pid104317 00:27:41.366 Removing: /var/run/dpdk/spdk_pid105566 00:27:41.366 Removing: /var/run/dpdk/spdk_pid106039 00:27:41.626 Removing: /var/run/dpdk/spdk_pid107161 00:27:41.626 Removing: /var/run/dpdk/spdk_pid107634 00:27:41.626 Removing: /var/run/dpdk/spdk_pid108755 00:27:41.626 Removing: /var/run/dpdk/spdk_pid109234 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110014 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110260 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110294 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110325 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110547 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110704 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110786 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110868 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110904 00:27:41.626 Removing: /var/run/dpdk/spdk_pid110930 00:27:41.626 Removing: /var/run/dpdk/spdk_pid67884 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68039 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68233 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68320 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68349 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68455 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68473 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68637 00:27:41.626 Removing: 
/var/run/dpdk/spdk_pid68697 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68774 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68866 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68933 00:27:41.626 Removing: /var/run/dpdk/spdk_pid68978 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69009 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69066 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69172 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69584 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69637 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69689 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69699 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69763 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69779 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69848 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69864 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69906 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69924 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69966 00:27:41.626 Removing: /var/run/dpdk/spdk_pid69984 00:27:41.626 Removing: /var/run/dpdk/spdk_pid70114 00:27:41.626 Removing: /var/run/dpdk/spdk_pid70145 00:27:41.626 Removing: /var/run/dpdk/spdk_pid70220 00:27:41.626 Removing: /var/run/dpdk/spdk_pid71660 00:27:41.626 Removing: /var/run/dpdk/spdk_pid71999 00:27:41.626 Removing: /var/run/dpdk/spdk_pid72163 00:27:41.626 Removing: /var/run/dpdk/spdk_pid73013 00:27:41.626 Removing: /var/run/dpdk/spdk_pid73346 00:27:41.626 Removing: /var/run/dpdk/spdk_pid73517 00:27:41.626 Removing: /var/run/dpdk/spdk_pid74354 00:27:41.626 Removing: /var/run/dpdk/spdk_pid74840 00:27:41.626 Removing: /var/run/dpdk/spdk_pid75005 00:27:41.886 Removing: /var/run/dpdk/spdk_pid76975 00:27:41.886 Removing: /var/run/dpdk/spdk_pid77414 00:27:41.886 Removing: /var/run/dpdk/spdk_pid77584 00:27:41.886 Removing: /var/run/dpdk/spdk_pid79548 00:27:41.886 Removing: /var/run/dpdk/spdk_pid79983 00:27:41.886 Removing: /var/run/dpdk/spdk_pid80161 00:27:41.886 Removing: /var/run/dpdk/spdk_pid82128 00:27:41.886 Removing: /var/run/dpdk/spdk_pid82818 00:27:41.886 Removing: /var/run/dpdk/spdk_pid82992 00:27:41.886 Removing: /var/run/dpdk/spdk_pid85188 00:27:41.887 Removing: /var/run/dpdk/spdk_pid85687 00:27:41.887 Removing: /var/run/dpdk/spdk_pid85864 00:27:41.887 Removing: /var/run/dpdk/spdk_pid88052 00:27:41.887 Removing: /var/run/dpdk/spdk_pid88551 00:27:41.887 Removing: /var/run/dpdk/spdk_pid88734 00:27:41.887 Removing: /var/run/dpdk/spdk_pid90927 00:27:41.887 Removing: /var/run/dpdk/spdk_pid91721 00:27:41.887 Removing: /var/run/dpdk/spdk_pid91900 00:27:41.887 Removing: /var/run/dpdk/spdk_pid92083 00:27:41.887 Removing: /var/run/dpdk/spdk_pid92554 00:27:41.887 Removing: /var/run/dpdk/spdk_pid93392 00:27:41.887 Removing: /var/run/dpdk/spdk_pid93818 00:27:41.887 Removing: /var/run/dpdk/spdk_pid94616 00:27:41.887 Removing: /var/run/dpdk/spdk_pid95101 00:27:41.887 Removing: /var/run/dpdk/spdk_pid95950 00:27:41.887 Removing: /var/run/dpdk/spdk_pid96402 00:27:41.887 Removing: /var/run/dpdk/spdk_pid99016 00:27:41.887 Removing: /var/run/dpdk/spdk_pid99696 00:27:41.887 Clean 00:27:41.887 06:22:43 -- common/autotest_common.sh@1447 -- # return 0 00:27:41.887 06:22:43 -- spdk/autotest.sh@395 -- # timing_exit post_cleanup 00:27:41.887 06:22:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.887 06:22:43 -- common/autotest_common.sh@10 -- # set +x 00:27:42.147 06:22:43 -- spdk/autotest.sh@397 -- # timing_exit autotest 00:27:42.147 06:22:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.147 06:22:43 -- common/autotest_common.sh@10 -- # set 
+x 00:27:42.147 06:22:43 -- spdk/autotest.sh@398 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:42.147 06:22:43 -- spdk/autotest.sh@400 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:42.147 06:22:43 -- spdk/autotest.sh@400 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:42.147 06:22:43 -- spdk/autotest.sh@402 -- # hash lcov 00:27:42.147 06:22:43 -- spdk/autotest.sh@402 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:42.147 06:22:43 -- spdk/autotest.sh@404 -- # hostname 00:27:42.147 06:22:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:42.406 geninfo: WARNING: invalid characters removed from testname! 00:28:08.962 06:23:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:09.901 06:23:11 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:11.809 06:23:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:13.716 06:23:15 -- spdk/autotest.sh@408 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:15.623 06:23:17 -- spdk/autotest.sh@409 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:17.531 06:23:19 -- spdk/autotest.sh@410 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:19.439 06:23:21 -- spdk/autotest.sh@411 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:19.439 06:23:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:19.439 06:23:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 
00:28:19.439 06:23:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.439 06:23:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.439 06:23:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.439 06:23:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.439 06:23:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.439 06:23:21 -- paths/export.sh@5 -- $ export PATH 00:28:19.439 06:23:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.439 06:23:21 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:19.439 06:23:21 -- common/autobuild_common.sh@447 -- $ date +%s 00:28:19.439 06:23:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723530201.XXXXXX 00:28:19.439 06:23:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723530201.OfaU0e 00:28:19.439 06:23:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:28:19.439 06:23:21 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:28:19.439 06:23:21 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:28:19.439 06:23:21 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:28:19.439 06:23:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:19.439 06:23:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:19.439 06:23:21 -- common/autobuild_common.sh@463 -- $ get_config_params 00:28:19.439 06:23:21 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:28:19.439 06:23:21 -- common/autotest_common.sh@10 -- $ set +x 00:28:19.439 06:23:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage 
--with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:28:19.439 06:23:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:28:19.439 06:23:21 -- pm/common@17 -- $ local monitor 00:28:19.440 06:23:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:19.440 06:23:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:19.440 06:23:21 -- pm/common@25 -- $ sleep 1 00:28:19.440 06:23:21 -- pm/common@21 -- $ date +%s 00:28:19.440 06:23:21 -- pm/common@21 -- $ date +%s 00:28:19.440 06:23:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723530201 00:28:19.440 06:23:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723530201 00:28:19.699 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723530201_collect-cpu-load.pm.log 00:28:19.699 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723530201_collect-vmstat.pm.log 00:28:20.639 06:23:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:28:20.639 06:23:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:20.639 06:23:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:20.639 06:23:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:20.639 06:23:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:20.639 06:23:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:20.639 06:23:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:20.639 06:23:22 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:20.639 06:23:22 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:20.639 06:23:22 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:20.639 06:23:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:20.639 06:23:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:20.639 06:23:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:20.639 06:23:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:20.639 06:23:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:20.639 06:23:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:20.639 06:23:22 -- pm/common@44 -- $ pid=112416 00:28:20.639 06:23:22 -- pm/common@50 -- $ kill -TERM 112416 00:28:20.639 06:23:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:20.639 06:23:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:20.639 06:23:22 -- pm/common@44 -- $ pid=112418 00:28:20.639 06:23:22 -- pm/common@50 -- $ kill -TERM 112418 00:28:20.639 + [[ -n 6155 ]] 00:28:20.639 + sudo kill 6155 00:28:20.650 [Pipeline] } 00:28:20.666 [Pipeline] // timeout 00:28:20.673 [Pipeline] } 00:28:20.687 [Pipeline] // stage 00:28:20.693 [Pipeline] } 00:28:20.708 [Pipeline] // catchError 00:28:20.719 [Pipeline] stage 00:28:20.721 [Pipeline] { (Stop VM) 00:28:20.735 [Pipeline] sh 00:28:21.084 + vagrant halt 00:28:23.005 ==> default: Halting domain... 00:28:31.147 [Pipeline] sh 00:28:31.431 + vagrant destroy -f 00:28:33.968 ==> default: Removing domain... 
00:28:33.979 [Pipeline] sh 00:28:34.261 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:28:34.269 [Pipeline] } 00:28:34.283 [Pipeline] // stage 00:28:34.288 [Pipeline] } 00:28:34.301 [Pipeline] // dir 00:28:34.306 [Pipeline] } 00:28:34.320 [Pipeline] // wrap 00:28:34.326 [Pipeline] } 00:28:34.338 [Pipeline] // catchError 00:28:34.346 [Pipeline] stage 00:28:34.348 [Pipeline] { (Epilogue) 00:28:34.360 [Pipeline] sh 00:28:34.643 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:38.850 [Pipeline] catchError 00:28:38.851 [Pipeline] { 00:28:38.864 [Pipeline] sh 00:28:39.148 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:39.407 Artifacts sizes are good 00:28:39.417 [Pipeline] } 00:28:39.431 [Pipeline] // catchError 00:28:39.442 [Pipeline] archiveArtifacts 00:28:39.449 Archiving artifacts 00:28:39.580 [Pipeline] cleanWs 00:28:39.595 [WS-CLEANUP] Deleting project workspace... 00:28:39.595 [WS-CLEANUP] Deferred wipeout is used... 00:28:39.619 [WS-CLEANUP] done 00:28:39.621 [Pipeline] } 00:28:39.636 [Pipeline] // stage 00:28:39.641 [Pipeline] } 00:28:39.656 [Pipeline] // node 00:28:39.661 [Pipeline] End of Pipeline 00:28:39.702 Finished: SUCCESS